34 results for "Arnaud Arindra Adiyoso Setio"
Search Results
2. Class-Aware Adversarial Lung Nodule Synthesis In CT Images.
- Author
-
Jie Yang 0041, Siqi Liu 0001, Sasa Grbic, Arnaud Arindra Adiyoso Setio, Zhoubing Xu, Eli Gibson, Guillaume Chabin, Bogdan Georgescu, Andrew F. Laine, and Dorin Comaniciu
- Published
- 2019
- Full Text
- View/download PDF
3. Extracting and Leveraging Nodule Features with Lung Inpainting for Local Feature Augmentation.
- Author
-
Sebastian Gündel, Arnaud Arindra Adiyoso Setio, Sasa Grbic, Andreas Maier 0001, and Dorin Comaniciu
- Published
- 2020
4. No Surprises: Training Robust Lung Nodule Detection for Low-Dose CT Scans by Augmenting with Adversarial Attacks.
- Author
-
Siqi Liu 0001, Arnaud Arindra Adiyoso Setio, Florin C. Ghesu, Eli Gibson, Sasa Grbic, Bogdan Georgescu, and Dorin Comaniciu
- Published
- 2020
5. A survey on deep learning in medical image analysis.
- Author
-
Geert Litjens 0001, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud Arindra Adiyoso Setio, Francesco Ciompi, Mohsen Ghafoorian, Jeroen A. W. M. van der Laak, Bram van Ginneken, and Clara I. Sánchez
- Published
- 2017
- Full Text
- View/download PDF
6. Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: The LUNA16 challenge.
- Author
-
Arnaud Arindra Adiyoso Setio, Alberto Traverso, Thomas de Bel, Moira S. N. Berens, Cas van den Bogaard, Piergiorgio Cerello, Hao Chen 0011, Qi Dou 0001, Maria Evelina Fantacci, Bram Geurts, Robbert van der Gugten, Pheng-Ann Heng, Bart Jansen 0001, Michael M. J. de Kaste, Valentin Kotov, Jack Yu-Hung Lin, Jeroen T. M. C. Manders, Alexander Sóñora-Mengana, Juan Carlos García-Naranjo, Evgenia Papavasileiou, and Mathias Prokop
- Published
- 2017
- Full Text
- View/download PDF
7. Deep Learning Based Centerline-Aggregated Aortic Hemodynamics: An Efficient Alternative to Numerical Modeling of Hemodynamics
- Author
-
Heiko Ramm, Lina Gundelwein, Leonid Goubergrits, Alexander Meyer, Titus Kuehne, Pavlo Yevtushenko, Tobias Heimann, Arnaud Arindra Adiyoso Setio, Marie Schafstedde, and Hans Lamecker
- Subjects
Patient-Specific Modeling, Computer science, Hemodynamics, Computational fluid dynamics, Machine learning, Deep Learning, Humans, Computer Simulation, Aorta, Artificial neural network, Numerical analysis, Models, Cardiovascular, Artificial intelligence
- Abstract
Image-based patient-specific modelling of hemodynamics is gaining popularity as a diagnosis and outcome prediction solution for a variety of cardiovascular diseases. While the potential of these methods to improve diagnostic capabilities and thereby clinical outcome is widely recognized, they require considerable computational resources since they are mostly based on conventional numerical methods such as computational fluid dynamics (CFD). As an alternative to the numerical methods, we propose a machine learning (ML) based approach to calculate patient-specific hemodynamic parameters. Compared to CFD-based methods, our approach holds the benefit of being able to calculate a patient-specific hemodynamic outcome instantly with little need for computational power. In this proof-of-concept study, we present a deep artificial neural network (ANN) capable of computing hemodynamics for patients with aortic coarctation in a centerline-aggregated (i.e., locally averaged) form. Considering the complex relation between vessel shape and hemodynamics on the one hand and the limited availability of suitable clinical data on the other, sufficient accuracy of the ANN may, however, not be achieved with the available data alone. Another key aspect of this study is therefore the successful augmentation of the available clinical data. Using a statistical shape model, additional training data were generated, which substantially increased the ANN's accuracy, showcasing the ability of ML-based methods to perform in-silico modelling tasks that previously required resource-intensive CFD simulations. (See the sketch after this record.)
- Published
- 2022
- Full Text
- View/download PDF
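The entry above replaces CFD with a neural network that maps aortic shape descriptors to centerline-aggregated hemodynamic quantities. The following is a minimal sketch of such a surrogate regressor using scikit-learn; the array shapes, feature meaning, and network size are illustrative assumptions, not the authors' configuration.

    # Minimal sketch of a shape-to-hemodynamics surrogate model (illustrative only).
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Assumed toy data: 500 aortas described by 20 shape parameters (e.g. statistical
    # shape model coefficients), each mapped to pressure averaged over 50 centerline points.
    X = rng.normal(size=(500, 20))            # shape descriptors per case
    y = rng.normal(size=(500, 50))            # centerline-aggregated pressure values

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    # A small fully connected network stands in for the deep ANN described in the paper.
    model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=2000, random_state=0)
    model.fit(X_tr, y_tr)

    # Inference is essentially instantaneous compared with a CFD run.
    pred = model.predict(X_te)
    rmse = np.sqrt(np.mean((pred - y_te) ** 2))
    print(f"toy RMSE: {rmse:.3f}")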
8. Pulmonary Nodule Detection in CT Images: False Positive Reduction Using Multi-View Convolutional Networks.
- Author
-
Arnaud Arindra Adiyoso Setio, Francesco Ciompi, Geert Litjens 0001, Paul K. Gerke, Colin Jacobs, Sarah J. van Riel, Mathilde Marie Winkler Wille, Matiullah Naqibullah, Clara I. Sánchez, and Bram van Ginneken
- Published
- 2016
- Full Text
- View/download PDF
9. Class-Aware Adversarial Lung Nodule Synthesis in CT Images.
- Author
-
Jie Yang 0041, Siqi Liu 0001, Sasa Grbic, Arnaud Arindra Adiyoso Setio, Zhoubing Xu, Eli Gibson, Guillaume Chabin, Bogdan Georgescu, Andrew F. Laine, and Dorin Comaniciu
- Published
- 2018
10. Decompose to manipulate: Manipulable Object Synthesis in 3D Medical Images with Structured Image Decomposition.
- Author
-
Siqi Liu 0001, Eli Gibson, Sasa Grbic, Zhoubing Xu, Arnaud Arindra Adiyoso Setio, Jie Yang 0041, Bogdan Georgescu, and Dorin Comaniciu
- Published
- 2018
11. Synthetic Database of Aortic Morphometry and Hemodynamics: Overcoming Medical Imaging Data Availability
- Author
-
Bente Thamsen, Leonid Goubergrits, Marie Schafstedde, Arnaud Arindra Adiyoso Setio, Pavlo Yevtushenko, Hans Lamecker, Tobias Heimann, Lina Gundelwein, Titus Kuehne, and Marcus Kelm
- Subjects
Heart disease, Coarctation of the aorta, Aortic Coarctation, Hemodynamics, Synthetic data, Data modeling, Artificial Intelligence, Humans, Aorta, Database, Models, Cardiovascular, Magnetic Resonance Imaging, Hierarchical clustering
- Abstract
Modeling of hemodynamics and artificial intelligence have great potential to support clinical diagnosis and decision making. While hemodynamics modeling is extremely time- and resource-consuming, machine learning (ML) typically requires large training datasets that are often unavailable. The aim of this study was to develop and evaluate a novel methodology for generating a large database of synthetic cases with characteristics similar to clinical cohorts of patients with coarctation of the aorta (CoA), a congenital heart disease associated with abnormal hemodynamics. Synthetic data allow the use of ML approaches to investigate aortic morphometric pathology and its influence on hemodynamics. Magnetic resonance imaging data of aortic shape and flow (from 154 patients as well as healthy subjects) were used to statistically characterize the clinical cohort. The methodology for generating the synthetic cohort combined statistical shape modeling of aortic morphometry and aortic inlet flow fields with numerical flow simulations. Hierarchical clustering and non-linear regression analysis were successfully used to investigate the relationship between morphometry and hemodynamics and to demonstrate the credibility of the synthetic cohort by comparison with a clinical cohort. A database of 2652 synthetic cases with realistic shape and hemodynamic properties was generated. Three shape clusters and respective differences in hemodynamics were identified. The novel model predicts the CoA pressure gradient with a root mean square error of 4.6 mmHg. In conclusion, synthetic data for anatomy and hemodynamics are a suitable means to address the lack of large datasets and provide a powerful basis for ML to gain new insights into cardiovascular diseases. (See the sketch after this record.)
- Published
- 2021
- Full Text
- View/download PDF
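The record above generates synthetic aortic anatomies from a statistical shape model before running flow simulations. Below is a hedged sketch of how shapes are commonly sampled from a PCA-based shape model (mean shape plus weighted eigenmodes); the mode count, bounds, and toy data are illustrative assumptions rather than the authors' pipeline.

    # Sampling synthetic shapes from a PCA statistical shape model (conceptual sketch).
    import numpy as np

    rng = np.random.default_rng(42)

    # Assumed training shapes: 154 cases, each a flattened set of 3D surface points.
    train_shapes = rng.normal(size=(154, 300))

    mean_shape = train_shapes.mean(axis=0)
    centered = train_shapes - mean_shape

    # Principal modes of variation via SVD of the centered data matrix.
    _, singular_values, modes = np.linalg.svd(centered, full_matrices=False)
    eigenvalues = singular_values ** 2 / (len(train_shapes) - 1)

    def sample_synthetic_shape(n_modes=10, limit=3.0):
        """Draw mode weights b_i ~ N(0, lambda_i), clipped to +/- limit standard deviations."""
        std = np.sqrt(eigenvalues[:n_modes])
        b = np.clip(rng.normal(0.0, std), -limit * std, limit * std)
        return mean_shape + b @ modes[:n_modes]

    synthetic_cohort = np.stack([sample_synthetic_shape() for _ in range(2652)])
    print(synthetic_cohort.shape)  # (2652, 300)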
12. Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: the LUNA16 challenge.
- Author
-
Arnaud Arindra Adiyoso Setio, Alberto Traverso, Thomas de Bel, Moira S. N. Berens, Cas van den Bogaard, Piergiorgio Cerello, Hao Chen 0011, Qi Dou 0001, Maria Evelina Fantacci, Bram Geurts, Robbert van der Gugten, Pheng-Ann Heng, Bart Jansen 0001, Michael M. J. de Kaste, Valentin Kotov, Jack Yu-Hung Lin, Jeroen T. M. C. Manders, Alexander Sóñora-Mengana, Juan Carlos García-Naranjo, Mathias Prokop, Marco Saletta, Cornelia Schaefer-Prokop, Ernst Th. Scholten, Luuk Scholten, Miranda M. Snoeren, Ernesto Lopez Torres, Jef Vandemeulebroucke, Nicole Walasek, Guido C. A. Zuidhof, Bram van Ginneken, and Colin Jacobs
- Published
- 2016
13. Towards automatic pulmonary nodule management in lung cancer screening with deep learning.
- Author
-
Francesco Ciompi, Kaman Chung, Sarah J. van Riel, Arnaud Arindra Adiyoso Setio, Paul K. Gerke, Colin Jacobs, Ernst Th. Scholten, Cornelia Schaefer-Prokop, Mathilde M. W. Wille, Alfonso Marchiano, Ugo Pastorino, Mathias Prokop, and Bram van Ginneken
- Published
- 2016
14. Organ detection in thorax abdomen CT using multi-label convolutional neural networks.
- Author
-
Gabriel Efrain Humpire Mamani, Arnaud Arindra Adiyoso Setio, Bram van Ginneken, and Colin Jacobs
- Published
- 2017
- Full Text
- View/download PDF
15. Deep Learning for Malignancy Risk Estimation of Pulmonary Nodules Detected at Low-Dose Screening CT
- Author
-
Anton Schreuder, Arnaud Arindra Adiyoso Setio, Mathilde M. W. Wille, Colin Jacobs, Zaigham Saghir, Kaman Chung, Ernst T. Scholten, Bram van Ginneken, Mathias Prokop, and Kiran Vaidhya Venkadesh
- Subjects
Lung Neoplasms, Radiation Dosage, Malignancy, Risk Assessment, Deep Learning, Humans, Mass Screening, Retrospective Studies, Solitary Pulmonary Nodule, Multiple Pulmonary Nodules, Radiology, Artificial intelligence, Tomography, X-Ray Computed
- Abstract
Background: Accurate estimation of the malignancy risk of pulmonary nodules at chest CT is crucial for optimizing management in lung cancer screening. Purpose: To develop and validate a deep learning (DL) algorithm for malignancy risk estimation of pulmonary nodules detected at screening CT. Materials and Methods: In this retrospective study, the DL algorithm was developed with 16 077 nodules (1249 malignant) collected between 2002 and 2004 from the National Lung Screening Trial. External validation was performed in the following three cohorts collected between 2004 and 2010 from the Danish Lung Cancer Screening Trial: a full cohort containing all 883 nodules (65 malignant) and two cancer-enriched cohorts with size matching (175 nodules, 59 malignant) and without size matching (177 nodules, 59 malignant) of benign nodules selected at random. Algorithm performance was measured using the area under the receiver operating characteristic curve (AUC) and compared with that of the Pan-Canadian Early Detection of Lung Cancer (PanCan) model in the full cohort and a group of 11 clinicians composed of four thoracic radiologists, five radiology residents, and two pulmonologists in the cancer-enriched cohorts. Results: The DL algorithm significantly outperformed the PanCan model in the full cohort (AUC, 0.93 [95% CI: 0.89, 0.96] vs 0.90 [95% CI: 0.86, 0.93]; P = .046). The algorithm performed comparably to thoracic radiologists in cancer-enriched cohorts with both random benign nodules (AUC, 0.96 [95% CI: 0.93, 0.99] vs 0.90 [95% CI: 0.81, 0.98]; P = .11) and size-matched benign nodules (AUC, 0.86 [95% CI: 0.80, 0.91] vs 0.82 [95% CI: 0.74, 0.89]; P = .26). Conclusion: The deep learning algorithm showed excellent performance, comparable to thoracic radiologists, for malignancy risk estimation of pulmonary nodules detected at screening CT. This algorithm has the potential to provide reliable and reproducible malignancy risk scores for clinicians, which may help optimize management in lung cancer screening. (See the sketch after this record.)
- Published
- 2021
- Full Text
- View/download PDF
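The study above compares AUCs between a deep learning model and the PanCan risk model. As a rough illustration of that style of evaluation, here is a sketch computing AUC with a percentile-bootstrap confidence interval from predicted risk scores; the scores are synthetic and the interval method is an assumption, so this does not reproduce the paper's statistics.

    # AUC with a simple percentile-bootstrap confidence interval (illustrative).
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Assumed toy cohort: 883 nodules, 65 malignant, with model risk scores in [0, 1].
    y_true = np.zeros(883, dtype=int)
    y_true[:65] = 1
    scores = np.clip(rng.beta(2, 5, size=883) + 0.3 * y_true, 0, 1)

    auc = roc_auc_score(y_true, scores)

    boot_aucs = []
    for _ in range(2000):
        idx = rng.integers(0, len(y_true), len(y_true))
        if y_true[idx].min() == y_true[idx].max():   # need both classes in the resample
            continue
        boot_aucs.append(roc_auc_score(y_true[idx], scores[idx]))

    lo, hi = np.percentile(boot_aucs, [2.5, 97.5])
    print(f"AUC {auc:.3f} (95% CI {lo:.3f}, {hi:.3f})")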
16. Deep Learning for Lung Cancer Detection on Screening CT Scans: Results of a Large-Scale Public Competition and an Observer Study with 11 Radiologists
- Author
-
Bram Geurts, Mario Silva, Paul F. Pinsky, Erik Ranschaert, Monique Brink, Bram van Ginneken, Colin Jacobs, Stephen Lam, Pim A. de Jong, Haimasree Bhattacharya, Steven Schalekamp, Keyvan Farahani, Kaman Chung, Paul K. Gerke, Arnaud Arindra Adiyoso Setio, Joke Meersschaert, Anand Devaraj, Firdaus A. A. Mohamed Hoesein, and Ernst T. Scholten
- Subjects
Deep learning, Artificial Intelligence, Medicine and Health Sciences, Medical physics, Lung cancer
- Abstract
An observer study showed that two of the three top-performing algorithms from a public competition (Kaggle Data Science Bowl 2017) attained performances that were not significantly worse than that of 11 radiologists for estimating lung cancer risk on low-dose CT scans. Purpose. To determine whether deep learning algorithms developed in a public competition could identify lung cancer on low-dose CT scans with a performance similar to that of radiologists. Materials and Methods. In this retrospective study, a dataset consisting of 300 patient scans was used for model assessment; 150 patient scans were from the competition set and 150 were from an independent dataset. Both test datasets contained 50 cancer-positive scans and 100 cancer-negative scans. The reference standard was set by histopathologic examination for cancer-positive scans and imaging follow-up for at least 2 years for cancer-negative scans. The test datasets were applied to the three top-performing algorithms from the Kaggle Data Science Bowl 2017 public competition: grt123, Julian de Wit and Daniel Hammack (JWDH), and Aidence. Model outputs were compared with an observer study of 11 radiologists that assessed the same test datasets. Each scan was scored on a continuous scale by both the deep learning algorithms and the radiologists. Performance was measured using multireader, multicase receiver operating characteristic analysis. Results. The area under the receiver operating characteristic curve (AUC) was 0.877 (95% CI: 0.842, 0.910) for grt123, 0.902 (95% CI: 0.871, 0.932) for JWDH, and 0.900 (95% CI: 0.870, 0.928) for Aidence. The average AUC of the radiologists was 0.917 (95% CI: 0.889, 0.945), which was significantly higher than that of grt123 (P = .02); however, no significant difference was found between the radiologists and JWDH (P = .29) or Aidence (P = .26). Conclusion. Deep learning algorithms developed in a public competition for lung cancer detection in low-dose CT scans reached performance close to that of radiologists.
- Published
- 2021
17. No Surprises: Training Robust Lung Nodule Detection for Low-Dose CT Scans by Augmenting with Adversarial Attacks
- Author
-
Arnaud Arindra Adiyoso Setio, Dorin Comaniciu, Siqi Liu, Sasa Grbic, Eli Gibson, Bogdan Georgescu, and Florin C. Ghesu
- Subjects
Nodule detection, Lung Neoplasms, Computer science, Machine Learning, Image processing, Computed tomography, Medical imaging, Image Processing, Computer-Assisted, Humans, Lung cancer, Lung, Early Detection of Cancer, Cancer, Solitary Pulmonary Nodule, Pattern recognition, Radiographic Image Interpretation, Computer-Assisted, Artificial intelligence, Tomography, X-Ray Computed, Lung cancer screening
- Abstract
Detecting malignant pulmonary nodules at an early stage can allow medical interventions which may increase the survival rate of lung cancer patients. Using computer vision techniques to detect nodules can improve the sensitivity and the speed of interpreting chest CT for lung cancer screening. Many studies have used CNNs to detect nodule candidates. Though such approaches have been shown to outperform conventional image-processing-based methods in detection accuracy, CNNs are also known to generalize poorly to under-represented samples in the training set and to be prone to imperceptible noise perturbations. Such limitations cannot be easily addressed by scaling up the dataset or the models. In this work, we propose to add adversarial synthetic nodules and adversarial attack samples to the training data to improve the generalization and the robustness of lung nodule detection systems. To generate hard examples of nodules from a differentiable nodule synthesizer, we use projected gradient descent (PGD) to search, within a bounded neighbourhood, for latent codes that generate nodules which decrease the detector response. To make the network more robust to unanticipated noise perturbations, we use PGD to search for noise patterns that can trigger the network into over-confident mistakes. By evaluating on two different benchmark datasets containing consensus annotations from three radiologists, we show that the proposed techniques can improve the detection performance on real CT data. To understand the limitations of both the conventional networks and the proposed augmented networks, we also perform stress tests on the false positive reduction networks by feeding them different types of artificially produced patches. We show that the augmented networks are more robust to under-represented nodules as well as more resistant to noise perturbations. (See the sketch after this record.)
- Published
- 2020
- Full Text
- View/download PDF
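The abstract above uses projected gradient descent (PGD) both to search a nodule synthesizer's latent space for hard examples and to find adversarial noise. The sketch below shows the generic PGD loop in PyTorch against stand-in networks; the models, step size, and bound are illustrative assumptions, not the published implementation.

    # Generic PGD search over a latent code to lower a detector's response (conceptual sketch).
    import torch

    torch.manual_seed(0)

    # Stand-ins for the differentiable nodule synthesizer and the detection network.
    synthesizer = torch.nn.Sequential(torch.nn.Linear(16, 32 * 32), torch.nn.Tanh())
    detector = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(32 * 32, 1), torch.nn.Sigmoid())

    z0 = torch.randn(1, 16)                  # initial latent code
    z = z0.clone()
    epsilon, step, n_steps = 0.5, 0.05, 20   # bounded neighbourhood and step size (assumed values)

    for _ in range(n_steps):
        z.requires_grad_(True)
        patch = synthesizer(z).view(1, 1, 32, 32)
        response = detector(patch).mean()            # detector confidence for the synthetic nodule
        grad, = torch.autograd.grad(response, z)
        with torch.no_grad():
            z = z - step * grad.sign()                          # descend: make the nodule harder to detect
            z = z0 + torch.clamp(z - z0, -epsilon, epsilon)     # project back into the bounded neighbourhood

    hard_nodule = synthesizer(z).view(32, 32).detach()
    print(hard_nodule.shape)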
18. Extracting and Leveraging Nodule Features with Lung Inpainting for Local Feature Augmentation
- Author
-
Arnaud Arindra Adiyoso Setio, Sasa Grbic, Sebastian Gündel, Andreas Maier, and Dorin Comaniciu
- Subjects
Feature (computer vision), Computer science, Inpainting, Subtraction, Nodule (medicine), Pattern recognition, Healthy tissue, Artificial intelligence
- Abstract
Chest X-ray (CXR) is the most common examination for fast detection of pulmonary abnormalities. Recently, automated algorithms have been developed to classify multiple diseases and abnormalities in CXR scans. However, because of the limited availability of scans containing nodules and the subtle properties of nodules in CXRs, state-of-the-art methods do not perform well on nodule classification. To create additional data for the training process, standard augmentation techniques are applied. However, the variance introduced by these methods is limited as the images are typically modified globally. In this paper, we propose a method for local feature augmentation by extracting local nodule features using a generative inpainting network. The network is applied to generate realistic, healthy tissue and structures in patches containing nodules, so that the nodules are entirely removed from the inpainted representation. The nodule features are then extracted by subtracting the inpainted patch from the original nodule patch. With arbitrary displacement of the extracted nodules within the lung area across different CXR scans and further local modifications during training, we significantly increase the nodule classification performance and outperform state-of-the-art augmentation methods. (See the sketch after this record.)
- Published
- 2020
- Full Text
- View/download PDF
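The record above extracts a nodule's local features by subtracting an inpainted (nodule-free) patch from the original patch, then pastes the residual elsewhere as augmentation. The numpy sketch below illustrates only that subtraction-and-paste arithmetic; the inpainting network is replaced by a placeholder and all sizes are assumed.

    # Local feature augmentation by nodule subtraction and re-insertion (conceptual sketch).
    import numpy as np

    rng = np.random.default_rng(1)

    def inpaint(patch):
        """Placeholder for the generative inpainting network: returns a 'healthy' patch."""
        return np.full_like(patch, patch.mean())

    # Assumed 64x64 patch containing a nodule, and a different chest X-ray to augment.
    nodule_patch = rng.normal(0.5, 0.05, size=(64, 64))
    nodule_patch[24:40, 24:40] += 0.4                # toy bright nodule

    healthy_patch = inpaint(nodule_patch)
    nodule_feature = nodule_patch - healthy_patch    # residual containing only the nodule appearance

    target_cxr = rng.normal(0.5, 0.05, size=(512, 512))
    y, x = 130, 300                                  # arbitrary displacement inside the lung area
    target_cxr[y:y + 64, x:x + 64] += nodule_feature # paste the extracted nodule into the new image
    print(target_cxr.shape)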
19. Robust classification from noisy labels: Integrating additional knowledge for chest radiography abnormality assessment
- Author
-
Dorin Comaniciu, Arnaud Arindra Adiyoso Setio, Andreas Maier, Sasa Grbic, Florin C. Ghesu, Bogdan Georgescu, and Sebastian Gündel
- Subjects
Lung Diseases, Computer science, Radiography, Multi-task learning, Robustness (computer science), Humans, Segmentation, Lung, Training set, Pattern recognition, Computer Vision and Pattern Recognition, Artificial intelligence, Abnormality, Natural language
- Abstract
Chest radiography is the most common radiographic examination performed in daily clinical practice for the detection of various heart and lung abnormalities. The large amount of data to be read and reported, with more than 100 studies per day for a single radiologist, poses a challenge in consistently maintaining high interpretation accuracy. The introduction of large-scale public datasets has led to a series of novel systems for automated abnormality classification. However, the labels of these datasets were obtained from natural-language-processed medical reports, yielding a large degree of label noise that can impact performance. In this study, we propose novel training strategies that handle label noise from such suboptimal data. Prior label probabilities were measured on a subset of training data re-read by four board-certified radiologists and were used during training to increase the robustness of the trained model to the label noise. Furthermore, we exploit the high comorbidity of abnormalities observed in chest radiography and incorporate this information to further reduce the impact of label noise. Additionally, anatomical knowledge is incorporated by training the system to predict lung and heart segmentation, as well as spatial knowledge labels. To deal with multiple datasets and images derived from various scanners that apply different post-processing techniques, we introduce a novel image normalization strategy. Experiments were performed on an extensive collection of 297,541 chest radiographs from 86,876 patients, leading to state-of-the-art performance for 17 abnormalities from two datasets. With an average AUC score of 0.880 across all abnormalities, our proposed training strategies can be used to significantly improve performance scores. (See the sketch after this record.)
- Published
- 2021
- Full Text
- View/download PDF
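The study above folds prior label probabilities, estimated from a radiologist re-read, into training to soften noisy labels. A minimal sketch of one way to do that, blending each dataset label with a per-label reliability estimate inside a binary cross-entropy loss, follows; the blending rule and all numbers are assumptions, not the paper's exact strategy.

    # Soft targets from noisy labels and per-label priors (illustrative sketch).
    import numpy as np

    def soft_targets(noisy_labels, prior_correct):
        """Blend each 0/1 label with the estimated probability that such a label is correct."""
        noisy_labels = np.asarray(noisy_labels, dtype=float)
        return noisy_labels * prior_correct + (1.0 - noisy_labels) * (1.0 - prior_correct)

    def binary_cross_entropy(p, target, eps=1e-7):
        p = np.clip(p, eps, 1.0 - eps)
        return float(np.mean(-(target * np.log(p) + (1.0 - target) * np.log(1.0 - p))))

    # Assumed example: 4 abnormality labels for one radiograph, with label reliabilities
    # estimated from the expert re-read (values are made up).
    labels = np.array([1, 0, 1, 0])
    prior_correct = np.array([0.9, 0.95, 0.7, 0.85])
    targets = soft_targets(labels, prior_correct)

    predictions = np.array([0.8, 0.1, 0.4, 0.2])
    print(targets)                                   # [0.9 0.05 0.7 0.15]
    print(binary_cross_entropy(predictions, targets))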
20. Automatic detection of large pulmonary solid nodules in thoracic CT images
- Author
-
Colin Jacobs, Jaap Gelderblom, Bram van Ginneken, and Arnaud Arindra Adiyoso Setio
- Subjects
Computer science, Radiography, Cancer, Image processing, Computed tomography, Image segmentation, Thresholding, Computer-aided diagnosis, Parenchyma, Medical imaging, Thoracic CT, Tomography, Radiology
- Abstract
Purpose: Current computer-aided detection (CAD) systems for pulmonary nodules in computed tomography (CT) scans have a good performance for relatively small nodules, but often fail to detect the much rarer larger nodules, which are more likely to be cancerous. We present a novel CAD system specifically designed to detect solid nodules larger than 10 mm. Methods: The proposed detection pipeline is initiated by a three-dimensional lung segmentation algorithm optimized to include large nodules attached to the pleural wall via morphological processing. An additional preprocessing step is used to mask out structures outside the pleural space to ensure that pleural and parenchymal nodules have a similar appearance. Next, nodule candidates are obtained via a multistage process of thresholding and morphological operations, to detect both larger and smaller candidates. After segmenting each candidate, a set of 24 features based on intensity, shape, blobness, and spatial context is computed. A radial basis function support vector machine (SVM) classifier was used to classify nodule candidates, and performance was evaluated using ten-fold cross-validation on the full publicly available Lung Image Database Consortium database. Results: The proposed CAD system reaches a sensitivity of 98.3% (234/238) and 94.1% (224/238) for large nodules at an average of 4.0 and 1.0 false positives per scan, respectively. Conclusions: The authors conclude that the proposed dedicated CAD system for large pulmonary nodules can identify the vast majority of highly suspicious lesions in thoracic CT scans with a small number of false positives. (See the sketch after this record.)
- Published
- 2015
- Full Text
- View/download PDF
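The CAD pipeline above classifies nodule candidates from 24 hand-crafted features with a radial basis function SVM under ten-fold cross-validation. Here is a hedged scikit-learn sketch of that classification stage only; the feature values are synthetic and the SVM hyperparameters are assumptions.

    # Candidate classification with an RBF-kernel SVM and 10-fold cross-validation (sketch).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Assumed toy data: 1000 candidates x 24 features (intensity, shape, blobness, context),
    # with ~5% true large nodules.
    X = rng.normal(size=(1000, 24))
    y = (rng.random(1000) < 0.05).astype(int)
    X[y == 1] += 0.8                                 # make positives weakly separable

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    scores = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
    print(f"10-fold ROC AUC: {scores.mean():.3f} +/- {scores.std():.3f}")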
21. Class-Aware Adversarial Lung Nodule Synthesis in CT Images
- Author
-
Siqi Liu, Guillaume Chabin, Andrew F. Laine, Bogdan Georgescu, Eli Gibson, Arnaud Arindra Adiyoso Setio, Sasa Grbic, Dorin Comaniciu, Jie Yang, and Zhoubing Xu
- Subjects
Computer science, Malignancy, Medical imaging, Deep learning, Supervised learning, Pattern recognition, Nodule (medicine), Binary classification, Artificial intelligence
- Abstract
Though large-scale datasets are essential for training deep learning systems, it is expensive to scale up the collection of medical imaging datasets. Synthesizing the objects of interest, such as lung nodules, in medical images based on the distribution of annotated datasets can be helpful for improving supervised learning tasks, especially when the datasets are limited in size and class balance. In this paper, we propose a class-aware adversarial synthesis framework to synthesize lung nodules in CT images. The framework is built with a coarse-to-fine patch in-painter (generator) and two class-aware discriminators. By conditioning on the random latent variables and the target nodule labels, the trained networks are able to generate diverse nodules given the same context. By evaluating on the public LIDC-IDRI dataset, we demonstrate an example application of the proposed framework for improving the accuracy of lung nodule malignancy estimation as a binary classification problem, which is important in the lung screening scenario. We show that combining the real image patches and the synthetic lung nodules in the training set can improve the mean AUC classification score across different network architectures by 2%.
- Published
- 2018
- Full Text
- View/download PDF
22. Efficient organ localization using multi-label convolutional neural networks in thorax-abdomen CT scans
- Author
-
Bram van Ginneken, Arnaud Arindra Adiyoso Setio, Colin Jacobs, and Gabriel Humpire-Mamani
- Subjects
Artificial neural network, Convolutional neural network, Computer science, Pattern recognition, Minimum bounding box, Test set, Abdomen, Image Processing, Computer-Assisted, Humans, Radiography, Thoracic, Segmentation, Neural Networks, Computer, Artificial intelligence, Tomography, X-Ray Computed
- Abstract
Automatic localization of organs and other structures in medical images is an important preprocessing step that can improve and speed up other algorithms such as organ segmentation, lesion detection, and registration. This work presents an efficient method for simultaneous localization of multiple structures in 3D thorax-abdomen CT scans. Our approach predicts the location of multiple structures using a single multi-label convolutional neural network for each orthogonal view. Each network takes extra slices around the current slice as input to provide extra context. A sigmoid layer is used to perform multi-label classification. The output of the three networks is subsequently combined to compute a 3D bounding box for each structure. We used our approach to locate 11 structures of interest. The neural network was trained and evaluated on a large set of 1884 thorax-abdomen CT scans from patients undergoing oncological workup. Reference bounding boxes were annotated by human observers. The performance of our method was evaluated by computing the wall distance to the reference bounding boxes. The bounding boxes annotated by the first human observer were used as the reference standard for the test set. Using the best configuration, we obtained an average wall distance of 3.20 ± 7.33 mm in the test set. The second human observer achieved 1.23 ± 3.39 mm. For all structures, the results were better than those reported in previously published studies. In conclusion, we proposed an efficient method for the accurate localization of multiple organs. Our method uses multiple slices as input to provide more context around the slice under analysis, and we have shown that this improves performance. This method can easily be adapted to handle more organs. (See the sketch after this record.)
- Published
- 2018
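The method above runs one multi-label network per orthogonal view (axial, coronal, sagittal), each predicting per slice whether a structure is present, and then combines the three outputs into a 3D bounding box. The sketch below shows only that combination step on made-up per-slice probabilities; the threshold and array sizes are assumptions.

    # Combining per-view slice predictions into a 3D bounding box (conceptual sketch).
    import numpy as np

    rng = np.random.default_rng(3)

    def presence_range(slice_probs, threshold=0.5):
        """Return the first/last slice index where the structure is predicted present."""
        idx = np.flatnonzero(slice_probs >= threshold)
        if idx.size == 0:
            return None
        return int(idx[0]), int(idx[-1])

    # Assumed scan of 160 x 256 x 256 voxels (z, y, x) and sigmoid outputs of the three
    # single-view networks for one organ, one probability per slice along each axis.
    probs_axial = rng.random(160)     # gives the z-extent
    probs_coronal = rng.random(256)   # gives the y-extent
    probs_sagittal = rng.random(256)  # gives the x-extent

    z_range = presence_range(probs_axial)
    y_range = presence_range(probs_coronal)
    x_range = presence_range(probs_sagittal)

    if None not in (z_range, y_range, x_range):
        print("3D bounding box:", {"z": z_range, "y": y_range, "x": x_range})
    else:
        print("structure not detected in all three views")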
23. Correction: Corrigendum: Towards automatic pulmonary nodule management in lung cancer screening with deep learning
- Author
-
Francesco Ciompi, Alfonso Marchianò, Mathias Prokop, Colin Jacobs, Kaman Chung, S. J. van Riel, B. van Ginneken, Cornelia M. Schaefer-Prokop, Paul K. Gerke, Mathilde M. W. Wille, Ugo Pastorino, Ernst Th. Scholten, and Arnaud Arindra Adiyoso Setio
- Subjects
Deep learning, Pulmonary nodule, Radiology, Artificial intelligence, Lung cancer screening
- Abstract
The introduction of lung cancer screening programs will produce an unprecedented amount of chest CT scans in the near future, which radiologists will have to read in order to decide on a patient follow-up strategy. According to the current guidelines, the workup of screen-detected nodules strongly relies on nodule size and nodule type. In this paper, we present a deep learning system based on multi-stream multi-scale convolutional networks, which automatically classifies all nodule types relevant for nodule workup. The system processes raw CT data containing a nodule without the need for any additional information such as nodule segmentation or nodule size, and learns a representation of 3D data by analyzing an arbitrary number of 2D views of a given nodule. The deep learning system was trained with data from the Italian MILD screening trial and validated on an independent set of data from the Danish DLCST screening trial. We analyze the advantage of processing nodules at multiple scales with a multi-stream convolutional network architecture, and we show that the proposed deep learning system achieves performance at classifying nodule type that surpasses that of classical machine learning approaches and is within the inter-observer variability among four experienced human observers.
- Published
- 2017
- Full Text
- View/download PDF
24. A Survey on Deep Learning in Medical Image Analysis
- Author
-
Francesco Ciompi, Arnaud Arindra Adiyoso Setio, Clara I. Sánchez, Babak Ehteshami Bejnordi, Bram van Ginneken, Thijs Kooi, Jeroen van der Laak, Geert Litjens, and Mohsen Ghafoorian
- Subjects
Diagnostic Imaging, Computer science, Machine Learning, Image Processing, Computer-Assisted, Humans, Segmentation, Contextual image classification, Deep learning, Digital pathology, Data science, Object detection, Computer Vision and Pattern Recognition, Artificial intelligence, Neural Networks, Computer, Algorithms
- Abstract
Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.
- Published
- 2017
25. Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: The LUNA16 challenge
- Author
-
Alberto Traverso, Jack Yu-Hung Lin, Moira S.N. Berens, Cornelia M. Schaefer-Prokop, Maria Evelina Fantacci, Mathias Prokop, Hao Chen, Michael M.J. de Kaste, Ernesto Lopez Torres, Jef Vandemeulebroucke, Pheng-Ann Heng, Evgenia Papavasileiou, Luuk Scholten, Bram van Ginneken, Cas van den Bogaard, Robbert van der Gugten, Juan C. García-Naranjo, Colin Jacobs, Arnaud Arindra Adiyoso Setio, Jeroen Manders, Guido Zuidhof, M. Saletta, Miranda M. Snoeren, Valentin Kotov, Alexander Sóñora-Mengana, Ernst T. Scholten, Qi Dou, Piergiorgio Cerello, Thomas de Bel, Bart Jansen, Bram Geurts, and Nicole Walasek
- Subjects
Lung Neoplasms, Databases, Factual, Computer science, Computer vision, Computed tomography, Pulmonary nodules, Convolutional networks, Imaging, Three-Dimensional, Medical imaging, Computer-aided detection, Deep learning, Medical image challenges, Humans, Solitary Pulmonary Nodule, Nodule (medicine), Data set, Radiographic Image Interpretation, Computer-Assisted, Radiology, Artificial intelligence, Tomography, X-Ray Computed, Algorithms
- Abstract
Automatic detection of pulmonary nodules in thoracic computed tomography (CT) scans has been an active area of research for the last two decades. However, there have been only a few studies that provide a comparative performance evaluation of different systems on a common database. We have therefore set up the LUNA16 challenge, an objective evaluation framework for automatic nodule detection algorithms using the largest publicly available reference database of chest CT scans, the LIDC-IDRI data set. In LUNA16, participants develop their algorithm and upload their predictions on 888 CT scans in one of two tracks: 1) the complete nodule detection track, where a complete CAD system should be developed, or 2) the false positive reduction track, where a provided set of nodule candidates should be classified. This paper describes the setup of LUNA16 and presents the results of the challenge so far. Moreover, the impact of combining individual systems on the detection performance was also investigated. It was observed that the leading solutions employed convolutional networks and used the provided set of nodule candidates. The combination of these solutions achieved an excellent sensitivity of over 95% at fewer than 1.0 false positives per scan. This highlights the potential of combining algorithms to improve the detection performance. Our observer study with four expert readers has shown that the best system detects nodules that were missed by the expert readers who originally annotated the LIDC-IDRI data. We released this set of additional nodules for further development of CAD systems.
- Published
- 2017
26. Improving airway segmentation in computed tomography using leak detection with convolutional networks
- Author
-
Bram van Ginneken, Jean-Paul Charbonnier, Arnaud Arindra Adiyoso Setio, Eva M. van Rikxoort, Francesco Ciompi, and Cornelia M. Schaefer-Prokop
- Subjects
Airway tree, Airway segmentation, Leak detection, Computer science, Respiratory System, Computed tomography, Humans, Thoracic CT, Sensitivity, Reproducibility of Results, Pattern recognition, Thorax, Computer Vision and Pattern Recognition, Artificial intelligence, Tomography, X-Ray Computed, Algorithms
- Abstract
We propose a novel method to improve airway segmentation in thoracic computed tomography (CT) by detecting and removing leaks. Leak detection is formulated as a classification problem, in which a convolutional network (ConvNet) is trained in a supervised fashion to perform the classification task. In order to increase the segmented airway tree length, we take advantage of the fact that multiple segmentations can be extracted from a given airway segmentation algorithm by varying the parameters that influence the tree length and the amount of leaks. We propose a strategy in which the combination of these segmentations after removing leaks can increase the airway tree length while limiting the amount of leaks. This strategy therefore largely circumvents the need for parameter fine-tuning of a given airway segmentation algorithm. The ConvNet was trained and evaluated using a subset of inspiratory thoracic CT scans taken from the COPDGene study. Our method was validated on a separate, independent set from the EXACT'09 challenge. We show that our method significantly improves the quality of a given leaky airway segmentation, achieving a higher sensitivity at a low false-positive rate compared to all the state-of-the-art methods that entered EXACT'09, and approaching the performance of the combination of all of them. (See the sketch after this record.)
- Published
- 2017
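The airway method above removes ConvNet-detected leaks from several candidate segmentations (obtained with different parameter settings) and merges what remains, lengthening the airway tree without adding leaks. A small numpy sketch of that remove-then-union idea follows; the leak classifier is a placeholder and the volumes are toy data.

    # Merging leak-pruned airway segmentations from multiple parameter settings (sketch).
    import numpy as np

    rng = np.random.default_rng(7)

    def classify_leak(segmentation):
        """Placeholder for the ConvNet leak classifier: returns a boolean leak mask."""
        return segmentation & (rng.random(segmentation.shape) < 0.1)

    # Assumed: three binary segmentations of the same scan, from conservative to aggressive settings.
    shape = (64, 64, 64)
    segmentations = [rng.random(shape) < p for p in (0.02, 0.05, 0.10)]

    combined = np.zeros(shape, dtype=bool)
    for seg in segmentations:
        leaks = classify_leak(seg)
        combined |= seg & ~leaks          # keep only the leak-free part of each segmentation

    print("voxels in combined airway tree:", int(combined.sum()))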
27. Using deep learning to segment breast and fibroglandular tissue in MRI volumes
- Author
-
Arnaud Arindra Adiyoso Setio, Mehmet Ufuk Dalmış, Albert Gubern-Mérida, Katharina Holland, Geert Litjens, Nico Karssemeijer, and Ritse M. Mann
- Subjects
Adult, Aged, Middle Aged, Female, Humans, Pathology, Computer science, Datasets as Topic, Pattern Recognition, Automated, Machine Learning, Atlases as Topic, Image Processing, Computer-Assisted, Breast MRI, Segmentation, Breast Density, Skin, Template matching, Deep learning, Pattern recognition, Fibroglandular Tissue, Magnetic Resonance Imaging, Artificial intelligence, Artifacts, Mammography
- Abstract
PURPOSE: Automated segmentation of breast and fibroglandular tissue (FGT) is required for various computer-aided applications of breast MRI. Traditional image analysis and computer vision techniques, such as atlas-based methods, template matching, or edge and surface detection, have been applied to solve this task. However, the applicability of these methods is usually limited by the characteristics of the images used in the study datasets, while breast MRI varies with respect to the different MRI protocols used, in addition to the variability in breast shapes. All this variability, in addition to various MRI artifacts, makes it a challenging task to develop a robust breast and FGT segmentation method using traditional approaches. Therefore, in this study, we investigated the use of a deep-learning approach known as "U-net." MATERIALS AND METHODS: We used a dataset of 66 breast MRI scans randomly selected from our scientific archive, which includes five different MRI acquisition protocols and breasts from four breast density categories in a balanced distribution. To prepare reference segmentations, we manually segmented breast and FGT for all images using an in-house developed workstation. We experimented with the application of U-net in two different ways for breast and FGT segmentation. In the first method, following the same pipeline used in traditional approaches, we trained two consecutive (2C) U-nets: the first for segmenting the breast in the whole MRI volume and the second for segmenting FGT inside the segmented breast. In the second method, we used a single 3-class (3C) U-net, which performs both tasks simultaneously by segmenting the volume into three regions: nonbreast, fat inside the breast, and FGT inside the breast. For comparison, we applied two existing and published methods to our dataset: an atlas-based method and a sheetness-based method. We used the Dice Similarity Coefficient (DSC) to measure the performance of the automated methods with respect to the manual segmentations. Additionally, we computed Pearson's correlation between the breast density values computed based on manual and automated segmentations. RESULTS: The average DSC values for breast segmentation were 0.933, 0.944, 0.863, and 0.848 obtained from the 3C U-net, 2C U-nets, atlas-based method, and sheetness-based method, respectively. The average DSC values for FGT segmentation obtained from the 3C U-net, 2C U-nets, and atlas-based method were 0.850, 0.811, and 0.671, respectively. The correlation between breast density values based on 3C U-net and manual segmentations was 0.974. This value was significantly higher than the 0.957 obtained from 2C U-nets (P < 0.0001, Steiger's Z-test with Bonferroni correction) and the 0.938 obtained from the atlas-based method (P = 0.0016). CONCLUSIONS: In conclusion, we applied a deep-learning method, U-net, for segmenting breast and FGT in MRI in a dataset that includes a variety of MRI protocols and breast densities. Our results showed that U-net-based methods significantly outperformed the existing algorithms and resulted in significantly more accurate breast density computation. (See the sketch after this record.)
- Published
- 2017
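Segmentation quality in the study above is reported as the Dice Similarity Coefficient (DSC) between automated and manual masks. For reference, a minimal DSC implementation on toy binary volumes is given below; the masks themselves are synthetic.

    # Dice Similarity Coefficient between two binary segmentations.
    import numpy as np

    def dice(mask_a, mask_b, eps=1e-8):
        mask_a = mask_a.astype(bool)
        mask_b = mask_b.astype(bool)
        intersection = np.logical_and(mask_a, mask_b).sum()
        return 2.0 * intersection / (mask_a.sum() + mask_b.sum() + eps)

    rng = np.random.default_rng(0)
    manual = rng.random((64, 128, 128)) < 0.2       # toy manual FGT segmentation
    automated = manual.copy()
    flip = rng.random(manual.shape) < 0.05          # perturb 5% of voxels to mimic model error
    automated[flip] = ~automated[flip]

    print(f"DSC: {dice(manual, automated):.3f}")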
28. Towards automatic pulmonary nodule management in lung cancer screening with deep learning
- Author
-
Ugo Pastorino, Paul K. Gerke, Cornelia Schaefer-Prokop, Mathilde M. W. Wille, Arnaud Arindra Adiyoso Setio, Alfonso Marchianò, Mathias Prokop, Bram van Ginneken, Colin Jacobs, Kaman Chung, Sarah J. van Riel, Francesco Ciompi, and Ernst T. Scholten
- Subjects
Lung Neoplasms, Computer science, Chest CT, Deep Learning, Pulmonary nodule, Humans, Segmentation, Early Detection of Cancer, Screening Trial, Solitary Pulmonary Nodule, Nodule (medicine), Pattern recognition, Artificial intelligence, Tomography, X-Ray Computed, Lung cancer screening
- Abstract
The introduction of lung cancer screening programs will produce an unprecedented amount of chest CT scans in the near future, which radiologists will have to read in order to decide on a patient follow-up strategy. According to the current guidelines, the workup of screen-detected nodules strongly relies on nodule size and nodule type. In this paper, we present a deep learning system based on multi-stream multi-scale convolutional networks, which automatically classifies all nodule types relevant for nodule workup. The system processes raw CT data containing a nodule without the need for any additional information such as nodule segmentation or nodule size, and learns a representation of 3D data by analyzing an arbitrary number of 2D views of a given nodule. The deep learning system was trained with data from the Italian MILD screening trial and validated on an independent set of data from the Danish DLCST screening trial. We analyze the advantage of processing nodules at multiple scales with a multi-stream convolutional network architecture, and we show that the proposed deep learning system achieves performance at classifying nodule type that surpasses that of classical machine learning approaches and is within the inter-observer variability among four experienced human observers.
- Published
- 2017
29. Pulmonary Nodule Detection in CT Images: False Positive Reduction Using Multi-View Convolutional Networks
- Author
-
Matiullah Naqibullah, Francesco Ciompi, Geert Litjens, Arnaud Arindra Adiyoso Setio, Paul K. Gerke, Colin Jacobs, Mathilde M. W. Wille, Sarah J. van Riel, Clara I. Sánchez, and Bram van Ginneken
- Subjects
Lung Neoplasms, Computer science, Feature extraction, Overfitting, Pattern Recognition, Automated, Machine Learning, Discriminative model, Humans, Computer vision, Dropout (neural networks), Solitary Pulmonary Nodule, Radiographic Image Interpretation, Computer-Assisted, Artificial intelligence, Tomography, X-Ray Computed, Algorithms
- Abstract
We propose a novel Computer-Aided Detection (CAD) system for pulmonary nodules using multi-view convolutional networks (ConvNets), for which discriminative features are automatically learnt from the training data. The network is fed with nodule candidates obtained by combining three candidate detectors specifically designed for solid, subsolid, and large nodules. For each candidate, a set of 2-D patches from differently oriented planes is extracted. The proposed architecture comprises multiple streams of 2-D ConvNets, whose outputs are combined using a dedicated fusion method to obtain the final classification. Data augmentation and dropout are applied to avoid overfitting. On 888 scans of the publicly available LIDC-IDRI dataset, our method reaches high detection sensitivities of 85.4% and 90.1% at 1 and 4 false positives per scan, respectively. An additional evaluation on independent datasets from the ANODE09 challenge and DLCST is performed. We show that the proposed multi-view ConvNet approach is highly suited for false positive reduction in a CAD system. (See the sketch after this record.)
- Published
- 2016
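The multi-view approach above feeds each candidate with 2D patches taken from differently oriented planes through the candidate location. The sketch below extracts the three orthogonal (axial, coronal, sagittal) patches around a candidate from a toy volume; the patch size and the restriction to orthogonal planes are simplifying assumptions.

    # Extracting orthogonal 2D patches around a nodule candidate (conceptual sketch).
    import numpy as np

    def orthogonal_patches(volume, center, size=64):
        """Return axial, coronal and sagittal patches of size x size centred on center."""
        z, y, x = center
        h = size // 2
        axial = volume[z, y - h:y + h, x - h:x + h]
        coronal = volume[z - h:z + h, y, x - h:x + h]
        sagittal = volume[z - h:z + h, y - h:y + h, x]
        return axial, coronal, sagittal

    rng = np.random.default_rng(0)
    ct = rng.normal(size=(200, 512, 512))            # toy CT volume (z, y, x)
    candidate = (100, 256, 300)                      # assumed candidate coordinates

    patches = orthogonal_patches(ct, candidate)
    print([p.shape for p in patches])                # [(64, 64), (64, 64), (64, 64)]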
30. Off-the-shelf convolutional neural network features for pulmonary nodule detection in computed tomography scans
- Author
-
Arnaud Arindra Adiyoso Setio, Francesco Ciompi, Colin Jacobs, and Bram van Ginneken
- Subjects
Computer science, Feature extraction, Cancer, Computed tomography, Convolutional neural network, Object detection, Support vector machine, Data set, Pulmonary nodule, Medical imaging, Computer vision, Artificial intelligence
- Abstract
Convolutional neural networks (CNNs) have emerged as the most powerful technique for a range of different tasks in computer vision. Recent work suggested that CNN features are generic and can be used for classification tasks outside the exact domain for which the networks were trained. In this work we use the features from one such network, OverFeat, trained for object detection in natural images, for nodule detection in computed tomography scans. We use 865 scans from the publicly available LIDC data set, read by four thoracic radiologists. Nodule candidates are generated by a state-of-the-art nodule detection system. We extract 2D sagittal, coronal and axial patches for each nodule candidate, extract 4096 features from the penultimate layer of OverFeat, and classify these with linear support vector machines. We show for various configurations that the off-the-shelf CNN features perform surprisingly well, but not as well as the dedicated detection system. When both approaches are combined, significantly better results are obtained than with either approach alone. We conclude that CNN features have great potential to be used for detection tasks in volumetric medical data. (See the sketch after this record.)
- Published
- 2015
- Full Text
- View/download PDF
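The entry above classifies nodule candidates by feeding pre-computed features from a pretrained network (OverFeat's penultimate layer, 4096 values per patch) to linear SVMs. The sketch below shows that final classification stage on stand-in feature vectors; the feature extraction itself is not reproduced here, and the class ratio and regularization setting are assumptions.

    # Linear SVM on off-the-shelf CNN features (classification stage only, illustrative).
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Stand-in for 4096-dimensional penultimate-layer features of candidate patches.
    X = rng.normal(size=(2000, 4096))
    y = (rng.random(2000) < 0.1).astype(int)          # ~10% true nodules (assumed ratio)
    X[y == 1, :50] += 0.5                             # weak signal so the toy task is learnable

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

    svm = LinearSVC(C=1.0)
    svm.fit(X_tr, y_tr)
    scores = svm.decision_function(X_te)
    print(f"toy AUC: {roc_auc_score(y_te, scores):.3f}")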
31. Computer-aided detection of lung cancer: combining pulmonary nodule detection systems with a tumor risk prediction model
- Author
-
Sarah J. van Riel, Mathilde M. W. Wille, Francesco Ciompi, Eva M. van Rikxoort, Asger Dirksen, Colin Jacobs, Bram van Ginneken, and Arnaud Arindra Adiyoso Setio
- Subjects
Cancer, Lung cancer, Nodule (medicine), CAD, Malignancy, Computer-aided diagnosis, Radiology, Lung cancer screening
- Abstract
Computer-Aided Detection (CAD) has been shown to be a promising tool for automatic detection of pulmonary nodules from computed tomography (CT) images. However, the vast majority of detected nodules are benign and do not require any treatment. For effective implementation of lung cancer screening programs, accurate identification of malignant nodules is key. We investigate strategies to improve the performance of a CAD system in detecting nodules with a high probability of being cancers. Two strategies were proposed: (1) combining CAD detections with a recently published lung cancer risk prediction model and (2) combining multiple CAD systems. First, CAD systems were used to detect the nodules. Each CAD system produces markers with a certain degree of suspicion. Next, the malignancy probability was automatically computed for each marker, given nodule characteristics measured by the CAD system. Last, the CAD degree of suspicion and the malignancy probability were combined using the product rule. We evaluated the method using 62 nodules that were proven to be malignant cancers, from 180 scans of the Danish Lung Cancer Screening Trial. The malignant nodules were considered as positive samples, while all other findings were considered negative. Using the product rule, the best proposed system achieved an improvement in sensitivity, compared to the best individual CAD system, from 41.9% to 72.6% at 2 false positives (FPs)/scan and from 56.5% to 88.7% at 8 FPs/scan. Our experiment shows that combining a nodule malignancy probability with multiple CAD systems can increase the performance of computerized detection of lung cancer. (See the sketch after this record.)
- Published
- 2015
- Full Text
- View/download PDF
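The key step in the abstract above is combining each CAD marker's degree of suspicion with its estimated malignancy probability using the product rule, optionally across multiple CAD systems. A few lines of numpy are enough to illustrate that combination; the scores below are invented.

    # Product-rule combination of CAD suspicion and malignancy probability (illustrative).
    import numpy as np

    # Assumed scores for five markers found in one scan, all scaled to [0, 1].
    cad_a_suspicion = np.array([0.90, 0.40, 0.75, 0.20, 0.60])   # CAD system A
    cad_b_suspicion = np.array([0.85, 0.55, 0.30, 0.25, 0.70])   # CAD system B
    malignancy_prob = np.array([0.70, 0.10, 0.65, 0.05, 0.50])   # risk-model output per marker

    # Product rule: multiply the normalized scores; high values require agreement of all terms.
    combined = cad_a_suspicion * cad_b_suspicion * malignancy_prob
    ranking = np.argsort(combined)[::-1]
    print("combined scores:", np.round(combined, 3))
    print("markers ranked by combined suspicion:", ranking)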
32. Memory-Centric Accelerator Design for Convolutional Neural Networks
- Author
-
Arnaud Arindra Adiyoso Setio, Maurice Peemen, Bart Mesman, and Henk Corporaal
- Subjects
Virtex, Memory hierarchy, Computer science, Locality, Memory bandwidth, Memory management, Embedded system, Computing with Memory
- Abstract
In the near future, cameras will be used everywhere as flexible sensors for numerous applications. For mobility and privacy reasons, the required image processing should be performed locally on embedded computer platforms with performance requirements and energy constraints. Dedicated acceleration of Convolutional Neural Networks (CNNs) can achieve these targets with enough flexibility to perform multiple vision tasks. A challenging problem for the design of efficient accelerators is the limited amount of external memory bandwidth. We show that the effects of the memory bottleneck can be reduced by a flexible memory hierarchy that supports the complex data access patterns in CNN workloads. The efficiency of the on-chip memories is maximized by our scheduler, which uses tiling to optimize for data locality. Our design flow ensures that on-chip memory size is minimized, which reduces area and energy usage. The design flow is evaluated by a High Level Synthesis implementation on a Virtex 6 FPGA board. Compared to accelerators with standard scratchpad memories, the FPGA resources can be reduced by up to 13× while maintaining the same performance. Alternatively, when the same amount of FPGA resources is used, our accelerators are up to 11× faster. (See the sketch after this record.)
- Published
- 2013
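The accelerator above relies on a scheduler that tiles the CNN loops so that data reused by nearby outputs stays in on-chip memory. The Python sketch below only illustrates the loop-tiling idea on a plain 2D convolution; the tile size and the "load tile" bookkeeping are assumptions and say nothing about the actual hardware design flow.

    # Loop tiling of a 2D convolution to illustrate on-chip data reuse (conceptual sketch).
    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.normal(size=(32, 32))
    kernel = rng.normal(size=(3, 3))
    K = kernel.shape[0]
    H_out, W_out = image.shape[0] - K + 1, image.shape[1] - K + 1
    out = np.zeros((H_out, W_out))
    TILE = 8                                          # assumed on-chip tile size

    for ty in range(0, H_out, TILE):
        for tx in range(0, W_out, TILE):
            # In hardware, this input tile (TILE+K-1 square) would be loaded once into
            # on-chip memory and reused for every output inside the tile.
            tile = image[ty:ty + TILE + K - 1, tx:tx + TILE + K - 1]
            for i in range(min(TILE, H_out - ty)):
                for j in range(min(TILE, W_out - tx)):
                    out[ty + i, tx + j] = np.sum(tile[i:i + K, j:j + K] * kernel)

    # Check against a direct (untiled) computation.
    ref = np.array([[np.sum(image[i:i + K, j:j + K] * kernel) for j in range(W_out)]
                    for i in range(H_out)])
    print("max difference vs direct convolution:", float(np.max(np.abs(out - ref))))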
33. Optimized 8-level turbo encoder algorithm and VLSI architecture for LTE
- Author
-
Arnaud Arindra Adiyoso Setio, Trio Adiono, and Ardimas Andi Purwita
- Subjects
Very-large-scale integration, Channel capacity, Noisy-channel coding theorem, Computer science, Convolutional code, Cellular network, Turbo code, Encoder, Algorithm
- Abstract
Turbo codes are a high-performance channel coding scheme able to closely approach the Shannon channel capacity limit. They play an important role in increasing performance in one of the latest standards in the mobile network technology tree, LTE [1]. In this paper, a new turbo code encoder architecture based on the 3GPP standard is proposed. The architecture is developed by implementing an optimized 8-level parallel architecture, dual RAM in the turbo code internal interleaver, recursive pairwise matching, and an efficient 8-level index generator in the turbo code internal interleaver. In order to verify the functionality of the proposed algorithm and architecture, MATLAB is used to simulate and profile the system. The proposed architecture successfully makes the encoder 16 times faster than the conventional architecture, at less than 50% of its size. (See the sketch after this record.)
- Published
- 2011
- Full Text
- View/download PDF
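The encoder above centres on the LTE turbo code internal interleaver, whose permutation is the quadratic permutation polynomial Pi(i) = (f1*i + f2*i^2) mod K defined in 3GPP TS 36.212. The sketch below generates those indices for one example block size; the coefficient pair shown is the one listed for K = 40 in the standard's table (verify before reuse), and the paper's parallel 8-level index generation and RAM organization are not modeled.

    # LTE turbo code internal interleaver: quadratic permutation polynomial (QPP) indices.
    def qpp_interleaver(K, f1, f2):
        """Return the interleaved read order Pi(i) = (f1*i + f2*i*i) mod K for block size K."""
        return [(f1 * i + f2 * i * i) % K for i in range(K)]

    # Assumed example block size K = 40 with coefficients f1 = 3, f2 = 10.
    K, f1, f2 = 40, 3, 10
    pi = qpp_interleaver(K, f1, f2)

    assert sorted(pi) == list(range(K)), "QPP must be a permutation of 0..K-1"

    # Interleave a toy information block before the second constituent encoder.
    bits = [i % 2 for i in range(K)]
    interleaved_bits = [bits[p] for p in pi]
    print(pi[:8])               # first interleaver indices
    print(interleaved_bits[:8])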
34. Efficient organ localization using multi-label convolutional neural networks in thorax-abdomen CT scans.
- Author
-
Gabriel Efrain Humpire-Mamani, Arnaud Arindra Adiyoso Setio, Bram van Ginneken, and Colin Jacobs
- Subjects
Biological neural networks, Computed tomography
- Abstract
Automatic localization of organs and other structures in medical images is an important preprocessing step that can improve and speed up other algorithms such as organ segmentation, lesion detection, and registration. This work presents an efficient method for simultaneous localization of multiple structures in 3D thorax-abdomen CT scans. Our approach predicts the location of multiple structures using a single multi-label convolutional neural network for each orthogonal view. Each network takes extra slices around the current slice as input to provide extra context. A sigmoid layer is used to perform multi-label classification. The output of the three networks is subsequently combined to compute a 3D bounding box for each structure. We used our approach to locate 11 structures of interest. The neural network was trained and evaluated on a large set of 1884 thorax-abdomen CT scans from patients undergoing oncological workup. Reference bounding boxes were annotated by human observers. The performance of our method was evaluated by computing the wall distance to the reference bounding boxes. The bounding boxes annotated by the first human observer were used as the reference standard for the test set. Using the best configuration, we obtained an average wall distance of 3.20 ± 7.33 mm in the test set. The second human observer achieved 1.23 ± 3.39 mm. For all structures, the results were better than those reported in previously published studies. In conclusion, we proposed an efficient method for the accurate localization of multiple organs. Our method uses multiple slices as input to provide more context around the slice under analysis, and we have shown that this improves performance. This method can easily be adapted to handle more organs.
- Published
- 2018
- Full Text
- View/download PDF