38 results for "Aria Pezeshk"
Search Results
2. Evaluating neural network robustness for melanoma classification using mutual information.
- Author
- Molly O'Brien, Julia V. Bukowski, Greg Hager, Aria Pezeshk, and Mathias Unberath
- Published
- 2022
3. Lung Nodule Malignancy Classification Based on NLSTx Data.
- Author
- Benjamin Veasey, Mohammad Mehdi Farhangi, Hichem Frigui, Justin Broadhead, Michael Dahle, Aria Pezeshk, Albert Seow, and Amir A. Amini
- Published
- 2020
4. Test Data Reuse for the Evaluation of Continuously Evolving Classification Algorithms Using the Area under the Receiver Operating Characteristic Curve.
- Author
- Alexej Gossmann, Aria Pezeshk, Yu-Ping Wang, and Berkman Sahiner
- Published
- 2021
5. Mapping DNN Embedding Manifolds for Network Generalization Prediction.
- Author
- Molly O'Brien, Julia V. Bukowski, Mathias Unberath, Aria Pezeshk, and Greg Hager
- Published
- 2022
6. 3-D Convolutional Neural Networks for Automatic Detection of Pulmonary Nodules in Chest CT.
- Author
- Aria Pezeshk, Sardar Hamidian, Nicholas Petrick, and Berkman Sahiner
- Published
- 2019
7. Seamless Lesion Insertion for Data Augmentation in CAD Training.
- Author
- Aria Pezeshk, Nicholas Petrick, Weijie Chen, and Berkman Sahiner
- Published
- 2017
8. Semi‐supervised training using cooperative labeling of weakly annotated data for nodule detection in chest CT
- Author
- Michael Maynord, M. Mehdi Farhangi, Cornelia Fermüller, Yiannis Aloimonos, Gary Levine, Nicholas Petrick, Berkman Sahiner, and Aria Pezeshk
- Subjects
General Medicine
- Published
- 2023
9. Reducing overfitting of a deep learning breast mass detection algorithm in mammography using synthetic images.
- Author
- Kenny H. Cha, Nicholas Petrick, Aria Pezeshk, Christian G. Graff, Diksha Sharma, Andreu Badal, Aldo Badano, and Berkman Sahiner
- Published
- 2019
10. Improved Multi Angled Parallelism for separation of text from intersecting linear features in scanned topographic maps.
- Author
- Aria Pezeshk and Richard L. Tutwiler
- Published
- 2010
11. Seamless Insertion of Pulmonary Nodules in Chest CT Images.
- Author
- Aria Pezeshk, Berkman Sahiner, Rongping Zeng, Adam Wunderlich, Weijie Chen, and Nicholas Petrick
- Published
- 2015
12. A database for assessment of effect of lossy compression on digital mammograms.
- Author
- Jiheng Wang, Berkman Sahiner, Nicholas Petrick, and Aria Pezeshk
- Published
- 2018
13. Test data reuse for evaluation of adaptive machine learning algorithms: over-fitting to a fixed 'test' dataset and a potential solution.
- Author
- Alexej Gossmann, Aria Pezeshk, and Berkman Sahiner
- Published
- 2018
14. Towards the use of computationally inserted lesions for mammographic CAD assessment.
- Author
- Zahra Ghanian, Aria Pezeshk, Nicholas Petrick, and Berkman Sahiner
- Published
- 2018
15. Automatic Feature Extraction and Text Recognition From Scanned Topographic Maps.
- Author
- Aria Pezeshk and Richard L. Tutwiler
- Published
- 2011
16. 3D convolutional neural network for automatic detection of lung nodules in chest CT.
- Author
- Sardar Hamidian, Berkman Sahiner, Nicholas Petrick, and Aria Pezeshk
- Published
- 2017
17. Seamless lesion insertion in digital mammography: methodology and reader study.
- Author
- Aria Pezeshk, Nicholas Petrick, and Berkman Sahiner
- Published
- 2016
18. Semi-parametric estimation of the area under the precision-recall curve.
- Author
- Berkman Sahiner, Weijie Chen, Aria Pezeshk, and Nicholas Petrick
- Published
- 2016
19. 3-D Convolutional Neural Networks for Automatic Detection of Pulmonary Nodules in Chest CT
- Author
- Berkman Sahiner, Sardar Hamidian, Aria Pezeshk, and Nicholas Petrick
- Subjects
Nodule detection, Lung Neoplasms, Computer science, Chest CT, Health Informatics, Convolutional neural network, Imaging, Three-Dimensional, Health Information Management, False positive paradox, Humans, Sensitivity (control systems), Electrical and Electronic Engineering, Medical imaging data, Lung, Artificial neural network, Solitary Pulmonary Nodule, Pattern recognition, Computer Science Applications, Radiographic Image Interpretation, Computer-Assisted, Neural Networks, Computer, Artificial intelligence, Tomography, Tomography, X-Ray Computed
- Abstract
Deep two-dimensional (2-D) convolutional neural networks (CNNs) have been remarkably successful in producing record-breaking results in a variety of computer vision tasks. It is possible to extend CNNs to three dimensions using 3-D kernels to make them suitable for volumetric medical imaging data such as CT or MRI, but this increases the processing time as well as the required number of training samples (due to the higher number of parameters that need to be learned). In this paper, we address both of these issues for a 3-D CNN implementation through the development of a two-stage computer-aided detection system for automatic detection of pulmonary nodules. The first stage consists of a 3-D fully convolutional network for fast screening and generation of candidate suspicious regions. The second stage consists of an ensemble of 3-D CNNs trained using extensive transformations applied to both the positive and negative patches to augment the training set. To enable the second stage classifiers to learn differently, they are trained on false positive patches obtained from the screening model using different thresholds on their associated scores as well as different augmentation types. The networks in the second stage are averaged together to produce the final classification score for each candidate patch. Using this procedure, our overall nodule detection system called DeepMed is fast and can achieve 91% sensitivity at 2 false positives per scan on cases from the LIDC dataset.
- Published
- 2019
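The two-stage pipeline described in the abstract above screens for candidates with a 3-D fully convolutional network and then averages an ensemble of 3-D CNN patch classifiers. The sketch below illustrates only the second-stage idea in PyTorch; the layer sizes, patch size, and ensemble size are illustrative assumptions, not the architecture from the paper.

```python
# Minimal sketch of a second-stage 3-D CNN false-positive classifier (illustrative,
# not the paper's architecture). Assumes 1-channel 32x32x32 candidate patches.
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d(2),                       # 32^3 -> 16^3
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.MaxPool3d(2),                       # 16^3 -> 8^3
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.BatchNorm3d(64), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),               # global pooling over the volume
        )
        self.classifier = nn.Linear(64, 1)         # nodule vs. non-nodule logit

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.classifier(f)

def ensemble_score(models, patches):
    """Average sigmoid outputs of several differently trained classifiers."""
    with torch.no_grad():
        probs = [torch.sigmoid(m(patches)) for m in models]
    return torch.stack(probs).mean(dim=0)

if __name__ == "__main__":
    candidates = torch.randn(4, 1, 32, 32, 32)       # stand-in for screened candidate patches
    models = [Small3DCNN().eval() for _ in range(3)]
    print(ensemble_score(models, candidates).shape)  # (4, 1) final candidate scores
```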
20. Evaluation of Simulated Lesions as Surrogates to Clinical Lesions for Thoracic CT Volumetry: The Results of an International Challenge
- Author
- Jayashree Kalpathy-Cramer, Aria Pezeshk, Rudresh Jarecha, Nicholas Petrick, Maria Athelogou, Marthony Robins, Andrew J. Buckler, Berkman Sahiner, Nancy A. Obuchowski, and Ehsan Samei
- Subjects
Lung Neoplasms, Quantitative imaging, Databases, Factual, Article, Imaging phantom, Humans, Thoracic CT, Medicine, Radiology, Nuclear Medicine and imaging, Segmentation, Lung, Equivalence (measure theory), Reproducibility, Phantoms, Imaging, Reproducibility of Results, Repeatability, Cone-Beam Computed Tomography, Confidence interval, Nuclear medicine, Algorithms
- Abstract
RATIONALE AND OBJECTIVES: To evaluate a new approach to establish compliance of segmentation tools with the computed tomography volumetry profile of the Quantitative Imaging Biomarker Alliance (QIBA), and to determine the statistical exchangeability between real and simulated lesions through an international challenge.
MATERIALS AND METHODS: The study used an anthropomorphic phantom with 16 embedded physical lesions and 30 patient cases from the Reference Image Database to Evaluate Therapy Response with pathologically confirmed malignancies. Hybrid datasets were generated by virtually inserting simulated lesions corresponding to the physical lesions into the phantom datasets using one projection-domain-based method (Method 1) and two image-domain insertion methods (Methods 2 and 3), and by inserting simulated lesions corresponding to real lesions into the Reference Image Database to Evaluate Therapy Response dataset (using Method 2). The volumes of the real and simulated lesions were compared based on bias (measured mean volume differences between physical and virtually inserted lesions in phantoms, as quantified by segmentation algorithms), repeatability, reproducibility, equivalence (phantom phase), and overall QIBA compliance (phantom and clinical phases).
RESULTS: For the phantom phase, three of eight groups were fully QIBA compliant, and one was marginally compliant. For the compliant groups, the estimated biases were -1.8 ± 1.4%, -2.5 ± 1.1%, -3 ± 1%, and -1.8 ± 1.5% (±95% confidence interval). No virtual insertion method showed statistical equivalence to physical insertion in bias equivalence testing using Schuirmann's two one-sided tests (±5% equivalence margin). Differences in repeatability and reproducibility across physical and simulated lesions were largely comparable (0.1%-16% and 7%-18% differences, respectively). For the clinical phase, 7 of 16 groups were QIBA compliant.
CONCLUSION: Hybrid datasets yielded conclusions similar to real computed tomography datasets: groups that were QIBA compliant on the phantom data were also compliant on the hybrid datasets. Some groups were deemed compliant for the simulated insertion methods but not for the physical lesion measurements. The magnitude of this difference was small (
- Published
- 2019
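Entry 20 above tests bias equivalence with Schuirmann's two one-sided tests (TOST) against a ±5% margin. The following sketch shows a generic one-sample TOST of that form; the per-lesion percent bias values are placeholders, not data from the challenge.

```python
# Sketch of Schuirmann's two one-sided tests (TOST) for equivalence of mean percent
# volume bias against a +/-5% margin, as referenced above. Bias values are placeholders.
import numpy as np
from scipy import stats

def tost_one_sample(x, low=-5.0, high=5.0, alpha=0.05):
    """Declare equivalence if both one-sided t-tests reject at level alpha."""
    n = len(x)
    mean, se = np.mean(x), stats.sem(x)
    t_low = (mean - low) / se                     # H0: mean <= low margin
    t_high = (mean - high) / se                   # H0: mean >= high margin
    p_low = 1.0 - stats.t.cdf(t_low, df=n - 1)
    p_high = stats.t.cdf(t_high, df=n - 1)
    p_tost = max(p_low, p_high)
    return p_tost, p_tost < alpha

if __name__ == "__main__":
    percent_bias = np.array([-1.2, -2.5, -0.8, -3.1, -1.9, -2.2])  # hypothetical per-lesion biases
    p, equivalent = tost_one_sample(percent_bias)
    print(f"TOST p-value: {p:.3f}, equivalent within +/-5%: {equivalent}")
```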
21. Improving CAD performance by seamless insertion of pulmonary nodules in chest CT exams.
- Author
- Aria Pezeshk, Berkman Sahiner, Weijie Chen, and Nicholas Petrick
- Published
- 2015
22. Automatic lung nodule detection in thoracic CT scans using dilated slice-wise convolutions
- Author
- Nicholas Petrick, M. Mehdi Farhangi, Aria Pezeshk, and Berkman Sahiner
- Subjects
Lung Neoplasms, Computer science, Aggregate (data warehouse), Pattern recognition, General Medicine, ENCODE, Convolutional neural network, Reduction (complexity), Computer Systems, False positive paradox, Medical imaging, Humans, Artificial intelligence, Sensitivity (control systems), Neural Networks, Computer, Tomography, X-Ray Computed, Lung, Volume (compression)
- Abstract
Purpose: Most state-of-the-art automated medical image analysis methods for volumetric data rely on adaptations of two-dimensional (2D) and three-dimensional (3D) convolutional neural networks (CNNs). In this paper, we develop a novel unified CNN-based model that combines the benefits of 2D and 3D networks for analyzing volumetric medical images.
Methods: In our proposed framework, multiscale contextual information is first extracted from 2D slices inside a volume of interest (VOI). This is followed by dilated 1D convolutions across slices to aggregate in-plane features in a slice-wise manner and encode the information in the entire volume. Moreover, we formalize a curriculum learning strategy for a two-stage system (i.e., a system that consists of screening and false positive reduction), where the training samples are presented to the network in a meaningful order to further improve the performance.
Results: We evaluated the proposed approach by developing a computer-aided detection (CADe) system for lung nodules. Our results on 888 CT exams demonstrate that the proposed approach can effectively analyze volumetric data by achieving a sensitivity of > 0.99 in the screening stage and a sensitivity of > 0.96 at eight false positives per case in the false positive reduction stage.
Conclusion: Our experimental results show that the proposed method provides competitive results compared to state-of-the-art 3D frameworks. In addition, we illustrate the benefits of curriculum learning strategies in two-stage systems that are of common use in medical imaging applications.
- Published
- 2021
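Entry 22 above aggregates per-slice 2-D features with dilated 1-D convolutions along the slice axis. Below is a minimal sketch of that aggregation step, assuming each slice has already been encoded into a feature vector; the channel counts and dilation rates are illustrative, not the paper's configuration.

```python
# Sketch of slice-wise aggregation with dilated 1-D convolutions, assuming each of the
# S slices in a volume of interest has already been encoded into a C-dim feature vector.
import torch
import torch.nn as nn

class DilatedSliceAggregator(nn.Module):
    def __init__(self, in_channels=128, hidden=64):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=4, dilation=4), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, 1)        # nodule vs. non-nodule logit

    def forward(self, slice_features):          # (batch, C, S): one feature vector per slice
        x = self.convs(slice_features)          # dilated convs mix information across slices
        x = x.mean(dim=-1)                      # pool over the slice axis
        return self.head(x)

if __name__ == "__main__":
    feats = torch.randn(2, 128, 24)             # 2 VOIs, 128-dim features for 24 slices each
    print(DilatedSliceAggregator()(feats).shape)   # (2, 1)
```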
23. Lung Nodule Malignancy Classification Based on NLSTx Data
- Author
- Aria Pezeshk, Hichem Frigui, Albert Seow, Benjamin Veasey, Justin Broadhead, Michael Dahle, M. Mehdi Farhangi, and Amir A. Amini
- Subjects
Computer science, Nodule (medicine), Pattern recognition, Malignancy, Convolutional neural network, Recurrent neural network, Feature (computer vision), Computer-aided diagnosis, National Lung Screening Trial, Artificial intelligence, Lung cancer
- Abstract
While several datasets containing CT images of lung nodules exist, they do not contain definitive diagnoses and often rely on radiologists' visual assessment for malignancy rating. This is in spite of the fact that lung cancer is one of the top three most frequently misdiagnosed diseases based on visual assessment. In this paper, we propose a dataset of difficult-to-diagnose lung nodules based on data from the National Lung Screening Trial (NLST), which we refer to as NLSTx. In NLSTx, each malignant nodule has a definitive ground truth label from biopsy. Herein, we also propose a novel deep convolutional neural network (CNN) / recurrent neural network framework that allows for use of pre-trained 2-D convolutional feature extractors, similar to those developed in the ImageNet challenge. Our results show that the proposed framework achieves comparable performance to an equivalent 3-D CNN while requiring half the number of parameters.
- Published
- 2020
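The NLSTx framework in entry 23 above reuses ImageNet-style 2-D convolutional feature extractors on CT slices. The sketch below shows one way to run a torchvision ResNet-18 backbone slice by slice (torchvision ≥ 0.13 assumed); replicating the single CT channel to three channels and the choice of backbone are assumptions for illustration, not the authors' implementation.

```python
# Sketch: reuse a 2-D ImageNet-style backbone as a per-slice feature extractor for a CT
# volume of interest (one possible reading of the framework above, not the authors' code).
import torch
import torch.nn as nn
import torchvision

class SliceFeatureExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)  # load ImageNet weights in practice
        self.backbone = nn.Sequential(*list(backbone.children())[:-1])  # drop the final FC layer

    def forward(self, volume):                  # volume: (batch, S, H, W) CT slices
        b, s, h, w = volume.shape
        slices = volume.reshape(b * s, 1, h, w).repeat(1, 3, 1, 1)  # replicate to 3 channels
        feats = self.backbone(slices).flatten(1)                    # (b*s, 512)
        return feats.reshape(b, s, -1)                              # per-slice feature sequence

if __name__ == "__main__":
    voi = torch.randn(2, 16, 64, 64)            # 2 nodule volumes, 16 slices of 64x64 each
    print(SliceFeatureExtractor()(voi).shape)   # (2, 16, 512)
```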
24. Deep learning in medical imaging and radiation therapy
- Author
- Karen Drukker, Ronald M. Summers, Lubomir M. Hadjiiski, Aria Pezeshk, Kenny H. Cha, Xiaosong Wang, Berkman Sahiner, and Maryellen L. Giger
- Subjects
Diagnostic Imaging, Radiotherapy, Computer science, Deep learning, Review Article, General Medicine, Signal-To-Noise Ratio, Data science, Convolutional neural network, Radiation therapy, Image Processing, Computer-Assisted, Medical imaging, Humans, Segmentation, Artificial intelligence, Artifacts
- Abstract
The goals of this review paper on deep learning (DL) in medical imaging and radiation therapy are to (a) summarize what has been achieved to date; (b) identify common and unique challenges, and strategies that researchers have taken to address these challenges; and (c) identify some of the promising avenues for the future both in terms of applications as well as technical innovations. We introduce the general principles of DL and convolutional neural networks, survey five major areas of application of DL in medical imaging and radiation therapy, identify common themes, discuss methods for dataset expansion, and conclude by summarizing lessons learned, remaining challenges, and future directions.
- Published
- 2018
25. Seamless insertion of real pulmonary nodules in chest CT exams.
- Author
- Aria Pezeshk, Berkman Sahiner, Rongping Zeng, Adam Wunderlich, Weijie Chen, and Nicholas Petrick
- Published
- 2014
26. Techniques for virtual lung nodule insertion: volumetric and morphometric comparison of projection-based and image-based methods for quantitative CT
- Author
- Kingshuk Roy Choudhury, Martin Sedlmair, Berkman Sahiner, Pooyan Sahbaee, Ehsan Samei, Justin Solomon, Marthony Robins, and Aria Pezeshk
- Subjects
Pathology, Lung Neoplasms, Tomography Scanners, X-Ray Computed, Partial volume, Iterative reconstruction, Article, Imaging phantom, Region of interest, Humans, Radiology, Nuclear Medicine and imaging, Projection (set theory), Radiological and Ultrasound Technology, Phantoms, Imaging, Orientation (computer vision), Solitary Pulmonary Nodule, Nodule (medicine), Hausdorff distance, Linear Models, Radiographic Image Interpretation, Computer-Assisted, Tomography, X-Ray Computed, Biomedical engineering
- Abstract
Virtual nodule insertion paves the way towards the development of standardized databases of hybrid CT images with known lesions. The purpose of this study was to assess three methods (an established and two newly developed techniques) for inserting virtual lung nodules into CT images. Assessment was done by comparing virtual nodule volume and shape to the CT-derived volume and shape of synthetic nodules. 24 synthetic nodules (three sizes, four morphologies, two repeats) were physically inserted into the lung cavity of an anthropomorphic chest phantom (KYOTO KAGAKU). The phantom was imaged with and without nodules on a commercial CT scanner (SOMATOM Definition Flash, Siemens) using a standard thoracic CT protocol at two dose levels (1.4 and 22 mGy CTDIvol). Raw projection data were saved and reconstructed with filtered back-projection and sinogram affirmed iterative reconstruction (SAFIRE, strength 5) at 0.6 mm slice thickness. Corresponding 3D idealized, virtual nodule models were co-registered with the CT images to determine each nodule’s location and orientation. Virtual nodules were voxelized, partial volume corrected, and inserted into nodule-free CT data (accounting for system imaging physics) using two methods: projection-based Technique A, and image-based Technique B. Also a third Technique C based on cropping a region of interest from the acquired image of the real nodule and blending it into the nodule-free image was tested. Nodule volumes were measured using a commercial segmentation tool (iNtuition, TeraRecon, Inc.) and deformation was assessed using the Hausdorff distance. Nodule volumes and deformations were compared between the idealized, CT-derived and virtual nodules using a linear mixed effects regression model which utilized the mean, standard deviation, and coefficient of variation (MeanRHD, STDRHD, and CVRHD) of the regional Hausdorff distance. Overall, there was a close concordance between the volumes of the CT-derived and virtual nodules. Percent differences between them were less than 3% for all insertion techniques and were not statistically significant in most cases. Correlation coefficient values were greater than 0.97. The deformation according to the Hausdorff distance was also similar between the CT-derived and virtual nodules with minimal statistical significance in the CVRHD for Techniques A, B, and C. This study shows that both projection-based and image-based nodule insertion techniques yield realistic nodule renderings with statistical similarity to the synthetic nodules with respect to nodule volume and deformation. These techniques could be used to create a database of hybrid CT images containing nodules of known size, location and morphology.
- Published
- 2017
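Entry 26 above quantifies deformation with the Hausdorff distance between CT-derived and virtual nodule boundaries. Below is a minimal sketch of that measurement on binary masks using SciPy; the spherical test masks are synthetic stand-ins, not data from the study.

```python
# Sketch: symmetric Hausdorff distance between two nodule boundary point sets, the
# deformation measure referenced above. The masks here are synthetic spheres.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import directed_hausdorff

def surface_points(mask):
    """Voxel coordinates on the boundary of a binary 3-D mask."""
    boundary = mask & ~binary_erosion(mask)
    return np.argwhere(boundary).astype(float)

def hausdorff(mask_a, mask_b):
    a, b = surface_points(mask_a), surface_points(mask_b)
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

if __name__ == "__main__":
    z, y, x = np.ogrid[:40, :40, :40]
    nodule_ct = (z - 20) ** 2 + (y - 20) ** 2 + (x - 20) ** 2 <= 10 ** 2   # "CT-derived" mask
    nodule_virtual = (z - 20) ** 2 + (y - 20) ** 2 + (x - 22) ** 2 <= 11 ** 2  # "virtual" mask
    print(f"Hausdorff distance: {hausdorff(nodule_ct, nodule_virtual):.2f} voxels")
```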
27. Recurrent attention network for false positive reduction in the detection of pulmonary nodules in thoracic CT scans
- Author
- Aria Pezeshk, M. Mehdi Farhangi, Berkman Sahiner, Amir A. Amini, Nicholas Petrick, and Hichem Frigui
- Subjects
Lung Neoplasms, Computer science, Pattern recognition, General Medicine, Convolutional neural network, Sensitivity and Specificity, Reduction (complexity), Recurrent neural network, Computer-aided diagnosis, False positive paradox, Medical imaging, Image Processing, Computer-Assisted, Humans, False Positive Reactions, Radiography, Thoracic, Sensitivity (control systems), Artificial intelligence, Neural Networks, Computer, Tomography, X-Ray Computed
- Abstract
Purpose: Multiview two-dimensional (2D) convolutional neural networks (CNNs) and three-dimensional (3D) CNNs have been successfully used for analyzing volumetric data in many state-of-the-art medical imaging applications. We propose an alternative modular framework that analyzes volumetric data with an approach that is analogous to radiologists' interpretation, and apply the framework to reduce false positives that are generated in computer-aided detection (CADe) systems for pulmonary nodules in thoracic computed tomography (CT) scans.
Methods: In our approach, a deep network consisting of 2D CNNs first processes slices individually. The features extracted in this stage are then passed to a recurrent neural network (RNN), thereby modeling consecutive slices as a sequence of temporal data and capturing the contextual information across all three dimensions in the volume of interest. Outputs of the RNN layer are weighed before the final fully connected layer, enabling the network to scale the importance of different slices within a volume of interest in an end-to-end training framework.
Results: We validated the proposed architecture on the false positive reduction track of the lung nodule analysis (LUNA) challenge for pulmonary nodule detection in chest CT scans, and obtained competitive results compared to 3D CNNs. Our results show that the proposed approach can encode the 3D information in volumetric data effectively by achieving a sensitivity >0.8 with just 1/8 false positives per scan.
Conclusions: Our experimental results demonstrate the effectiveness of temporal analysis of volumetric images for the application of false positive reduction in chest CT scans and show that state-of-the-art 2D architectures from the literature can be directly applied to analyzing volumetric medical data. As newer and better 2D architectures are being developed at a much faster rate compared to 3D architectures, our approach makes it easy to obtain state-of-the-art performance on volumetric data using new 2D architectures.
- Published
- 2019
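The architecture in entry 27 above passes per-slice CNN features through an RNN and weighs each slice's output before the final fully connected layer. The sketch below implements that slice-weighting idea with a GRU and a learned score per slice; the dimensions and the GRU choice are illustrative assumptions, not the authors' exact design.

```python
# Sketch of the "weigh RNN outputs per slice" idea described above: a GRU runs over
# per-slice CNN features and a learned score decides how much each slice contributes.
import torch
import torch.nn as nn

class RecurrentSliceAttention(nn.Module):
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)          # one relevance score per slice
        self.classifier = nn.Linear(hidden, 1)    # candidate-level nodule logit

    def forward(self, slice_feats):               # (batch, S, feat_dim)
        outputs, _ = self.gru(slice_feats)        # (batch, S, hidden)
        weights = torch.softmax(self.attn(outputs), dim=1)   # normalize scores over slices
        pooled = (weights * outputs).sum(dim=1)              # weighted sum of slice states
        return self.classifier(pooled)

if __name__ == "__main__":
    feats = torch.randn(4, 9, 128)                # 4 candidates, 9 slices each
    print(RecurrentSliceAttention()(feats).shape)   # (4, 1)
```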
28. Calibration of medical diagnostic classifier scores to the probability of disease
- Author
- Frank W. Samuelson, Aria Pezeshk, Berkman Sahiner, Weijie Chen, and Nicholas Petrick
- Subjects
Statistics and Probability, Medical diagnostic, Epidemiology, Computer science, Statistics as Topic, Rationality, Disease, Machine learning, Clinical decision support system, Statistics, Nonparametric, Article, Health Information Management, Probability of disease, Diagnosis, Confidence Intervals, Humans, Classifier, Probability, Diagnostic test, Pattern recognition, Sample Size, Calibration, Artificial intelligence, Classifier (UML)
- Abstract
Scores produced by statistical classifiers in many clinical decision support systems and other medical diagnostic devices are generally on an arbitrary scale, so the clinical meaning of these scores is unclear. Calibration of classifier scores to a meaningful scale such as the probability of disease is potentially useful when such scores are used by a physician. In this work, we investigated three methods (parametric, semi-parametric, and non-parametric) for calibrating classifier scores to the probability of disease scale and developed uncertainty estimation techniques for these methods. We showed that classifier scores on arbitrary scales can be calibrated to the probability of disease scale without affecting their discrimination performance. With a finite dataset to train the calibration function, it is important to accompany the probability estimate with its confidence interval. Our simulations indicate that, when a dataset used for finding the transformation for calibration is also used for estimating the performance of calibration, the resubstitution bias exists for a performance metric involving the truth states in evaluating the calibration performance. However, the bias is small for the parametric and semi-parametric methods when the sample size is moderate to large (>100 per class).
- Published
- 2016
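One of the calibration families studied in entry 28 above is non-parametric. A common way to realize it is isotonic regression (pool-adjacent-violators), sketched below with scikit-learn on simulated scores; this is a generic illustration, not the authors' implementation or data.

```python
# Sketch: non-parametric calibration of arbitrary classifier scores to probability of
# disease with isotonic regression (pool-adjacent-violators). Scores/labels are simulated.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
n = 500
truth = rng.integers(0, 2, size=n)                  # 1 = diseased, 0 = non-diseased
scores = rng.normal(loc=truth * 1.5, scale=1.0)     # arbitrary-scale classifier output

calibrator = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
calibrator.fit(scores, truth)                       # monotone map: score -> P(disease)

new_scores = np.array([-2.0, 0.0, 0.75, 3.0])
print(calibrator.predict(new_scores))               # calibrated probability estimates
```

Note that a monotone mapping like this leaves the ranking of cases, and hence discrimination performance, unchanged, which is consistent with the point made in the abstract above.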
29. Computational insertion of microcalcification clusters on mammograms: reader differentiation from native clusters and computer-aided detection comparison
- Author
- Nicholas Petrick, Aria Pezeshk, Berkman Sahiner, and Zahra Ghanian
- Subjects
Digital mammography, Receiver operating characteristic, Pattern recognition, Computer aided detection, Computer-Aided Diagnosis, Microcalcification clusters, Computer-aided diagnosis, Medical imaging, Image acquisition, Mammography, Radiology, Nuclear Medicine and imaging, Artificial intelligence
- Abstract
Mammographic computer-aided detection (CADe) devices are typically first developed and assessed for a specific “original” acquisition system. When developers are ready to apply their CADe device to a mammographic acquisition system, they typically assess the device with images acquired using the system. Collecting large repositories of clinical images containing verified lesion locations acquired by a system is costly and time consuming. We previously developed an image blending technique that allows users to seamlessly insert regions of interest (ROIs) from one medical image into another image. Our goal is to assess the performance of this technique for inserting microcalcification clusters from one mammogram into another, with the idea that when fully developed, our technique may be useful for reducing the clinical data burden in the assessment of a CADe device for use with an image acquisition system. We first perform a reader study to assess whether experienced observers can distinguish between computationally inserted and native clusters. For this purpose, we apply our insertion technique to 55 clinical cases. ROIs containing microcalcification clusters from one breast of a patient are inserted into the contralateral breast of the same patient. The analysis of the reader ratings using receiver operating characteristic (ROC) methodology indicates that inserted clusters cannot be reliably distinguished from native clusters (area under the ROC [Formula: see text]). Furthermore, CADe sensitivity is evaluated on mammograms of 68 clinical cases with native and inserted microcalcification clusters using a commercial CADe system. The average by-case sensitivities for native and inserted clusters are equal, 85.3% (58/68). The average by-image sensitivities for native and inserted clusters are 72.3% and 67.6%, respectively, with a difference of 4.7% and a 95% confidence interval of [[Formula: see text] 11.6]. These results demonstrate the potential for using the inserted microcalcification clusters for assessing mammographic CADe devices.
- Published
- 2018
30. Evaluation of data augmentation via synthetic images for improved breast mass detection on mammograms using deep learning
- Author
- Andreu Badal, Aria Pezeshk, Nicholas Petrick, Kenny H. Cha, Christian G. Graff, Berkman Sahiner, and Diksha Sharma
- Subjects
Digital mammography, Receiver operating characteristic, Breast imaging, Deep learning, Pattern recognition, Overfitting, Data set, Special Section on Evaluation Methodologies for Clinical AI, Medical imaging, Medicine, Mammography, Radiology, Nuclear Medicine and imaging, Artificial intelligence, Skin and connective tissue diseases
- Abstract
We evaluated whether using synthetic mammograms for training data augmentation may reduce the effects of overfitting and increase the performance of a deep learning algorithm for breast mass detection. Synthetic mammograms were generated using in silico procedural analytic breast and breast mass modeling algorithms followed by simulated x-ray projections of the breast models into mammographic images. In silico breast phantoms containing masses were modeled across the four BI-RADS breast density categories, and the masses were modeled with different sizes, shapes, and margins. A Monte Carlo-based x-ray transport simulation code, MC-GPU, was used to project the three-dimensional phantoms into realistic synthetic mammograms. 2000 mammograms with 2522 masses were generated to augment a real data set during training. From the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) data set, we used 1111 mammograms (1198 masses) for training, 120 mammograms (120 masses) for validation, and 361 mammograms (378 masses) for testing. We used faster R-CNN for our deep learning network with pretraining from ImageNet using the Resnet-101 architecture. We compared the detection performance when the network was trained using different percentages of the real CBIS-DDSM training set (100%, 50%, and 25%), and when these subsets of the training set were augmented with 250, 500, 1000, and 2000 synthetic mammograms. Free-response receiver operating characteristic (FROC) analysis was performed to compare performance with and without the synthetic mammograms. We generally observed an improved test FROC curve when training with the synthetic images compared to training without them, and the amount of improvement depended on the number of real and synthetic images used in training. Our study shows that enlarging the training data with synthetic samples can increase the performance of deep learning systems.
- Published
- 2019
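Entry 30 above reports detection performance with FROC analysis (sensitivity versus false positives per image). The sketch below computes FROC operating points from scored detections that have already been matched to ground truth; the input format and the assumption of at most one true-positive mark per lesion are simplifications.

```python
# Sketch: build an FROC curve (sensitivity vs. false positives per image) from scored
# detections already matched to ground-truth masses. Assumes at most one TP per lesion.
import numpy as np

def froc(scores, is_tp, n_lesions, n_images):
    """scores/is_tp: one entry per detection; returns (FP per image, sensitivity) arrays."""
    is_tp = np.asarray(is_tp, dtype=bool)
    order = np.argsort(scores)[::-1]              # sweep the threshold from high to low
    tp = np.cumsum(is_tp[order])
    fp = np.cumsum(~is_tp[order])
    return fp / n_images, tp / n_lesions

if __name__ == "__main__":
    scores = np.array([0.95, 0.90, 0.80, 0.70, 0.60, 0.40, 0.30])  # hypothetical detections
    is_tp  = np.array([True, True, False, True, False, False, True])
    fppi, sens = froc(scores, is_tp, n_lesions=5, n_images=3)
    for f, s in zip(fppi, sens):
        print(f"{f:.2f} FP/image -> sensitivity {s:.2f}")
```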
31. CT image quality evaluation for detection of signals with unknown location, size, contrast and shape using unsupervised methods
- Author
- Berkman Sahiner, Aria Pezeshk, and Lucretiu M. Popescu
- Subjects
Radon transform, Image quality, Computer science, Template matching, Pattern recognition, Image processing, Iterative reconstruction, Signal, Digital image processing, Computer vision, Artificial intelligence, Image restoration, Feature detection (computer vision)
- Abstract
The advent of new image reconstruction and image processing techniques for CT images has increased the need for robust objective image quality assessment methods. One of the most common quality assessment methods is the measurement of signal detectability for a known signal at a known location using supervised classification techniques. However, this method requires a large number of simulations or physical measurements, and its underlying assumptions may be considered clinically unrealistic. In this study we focus on objective assessment of image quality in terms of detection of a signal with unknown location, size, shape, and contrast. We explore several unsupervised saliency detection methods which assume no knowledge about the signal, along with a template matching technique which uses information about the signal's size and shape in the object domain, for simulated phantoms that have been reconstructed using filtered back projection (FBP) and iterative reconstruction algorithms (IRA). The performance of each of the image reconstruction algorithms is then measured using the area under the localization receiver operating characteristic curve (LROC) and exponential transformation of the free response operating characteristic curve (EFROC). Our results indicate that unsupervised saliency detection methods can be effectively used to determine image quality in terms of signal detectability for unknown signals given only a small number of sample images.
- Published
- 2015
32. Comparison of two stand-alone CADe systems at multiple operating points
- Author
- Nicholas Petrick, Aria Pezeshk, Weijie Chen, and Berkman Sahiner
- Subjects
False discovery rate, Operating point, Bonferroni correction, Computer science, Multiple comparisons problem, Algorithm, Simulation, Statistical hypothesis testing, Type I and type II errors
- Abstract
Computer-aided detection (CADe) systems are typically designed to work at a given operating point: The device displays a mark if and only if the level of suspiciousness of a region of interest is above a fixed threshold. To compare the standalone performances of two systems, one approach is to select the parameters of the systems to yield a target false-positive rate that defines the operating point, and to compare the sensitivities at that operating point. Increasingly, CADe developers offer multiple operating points, which necessitates the comparison of two CADe systems involving multiple comparisons. To control the Type I error, multiple-comparison correction is needed for keeping the family-wise error rate (FWER) less than a given alpha-level. The sensitivities of a single modality at different operating points are correlated. In addition, the sensitivities of the two modalities at the same or different operating points are also likely to be correlated. It has been shown in the literature that when test statistics are correlated, well-known methods for controlling the FWER are conservative. In this study, we compared the FWER and power of three methods, namely the Bonferroni, step-up, and adjusted step-up methods in comparing the sensitivities of two CADe systems at multiple operating points, where the adjusted step-up method uses the estimated correlations. Our results indicate that the adjusted step-up method has a substantial advantage over the other two methods both in terms of the FWER and power.
- Published
- 2015
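Entry 32 above compares Bonferroni and step-up procedures for controlling the family-wise error rate across multiple operating points. The sketch below shows the standard Bonferroni and Hochberg step-up adjustments on a vector of p-values; the paper's correlation-adjusted step-up variant is not reproduced here, and the p-values are placeholders.

```python
# Sketch: two standard FWER-controlling procedures referenced above, applied to p-values
# from sensitivity comparisons at several operating points (values are placeholders).
import numpy as np

def bonferroni(pvals, alpha=0.05):
    return np.asarray(pvals) < alpha / len(pvals)

def hochberg_step_up(pvals, alpha=0.05):
    p = np.asarray(pvals)
    order = np.argsort(p)[::-1]                  # examine the largest p-value first
    reject = np.zeros(len(p), dtype=bool)
    for rank, idx in enumerate(order):           # compare k-th largest p with alpha/k
        if p[idx] <= alpha / (rank + 1):
            reject[p <= p[idx]] = True           # reject this and all smaller p-values
            break
    return reject

if __name__ == "__main__":
    pvals = [0.01, 0.02, 0.03, 0.04]             # hypothetical per-operating-point p-values
    print("Bonferroni:", bonferroni(pvals))      # rejects only the smallest
    print("Hochberg  :", hochberg_step_up(pvals))  # step-up is uniformly more powerful
```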
33. Investigation of methods for calibration of classifier scores to probability of disease
- Author
- Weijie Chen, Aria Pezeshk, Berkman Sahiner, Frank W. Samuelson, and Nicholas Petrick
- Subjects
Bayes' theorem, Brier score, Receiver operating characteristic, Mean squared error, Sample size determination, Nonparametric statistics, Isotonic regression, Pattern recognition, Artificial intelligence, Classifier (UML), Mathematics
- Abstract
Classifier scores in many diagnostic devices, such as computer-aided diagnosis systems, are usually on an arbitrary scale, the meaning of which is unclear. Calibration of classifier scores to a meaningful scale such as the probability of disease is potentially useful when such scores are used by a physician or another algorithm. In this work, we investigated the properties of two methods for calibrating classifier scores to probability of disease. The first is a semiparametric method in which the likelihood ratio for each score is estimated based on a semiparametric proper receiver operating characteristic model, and then an estimate of the probability of disease is obtained using the Bayes theorem assuming a known prevalence of disease. The second method is nonparametric in which isotonic regression via the pool-adjacent-violators algorithm is used. We employed the mean square error (MSE) and the Brier score to evaluate the two methods. We evaluate the methods under two paradigms: (a) the dataset used to construct the score-to-probability mapping function is used to calculate the performance metric (MSE or Brier score) (resubstitution); (b) an independent test dataset is used to calculate the performance metric (independent). Under our simulation conditions, the semiparametric method is found to be superior to the nonparametric method at small to medium sample sizes and the two methods appear to converge at large sample sizes. Our simulation results also indicate that the resubstitution bias may depend on the performance metric and, for the semiparametric method, the resubstitution bias is small when a reasonable number of cases (> 100 cases per class) are available.
- Published
- 2015
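The semiparametric method in entry 33 above maps a score's likelihood ratio to the probability of disease through Bayes' theorem at an assumed prevalence. The sketch below shows just that conversion step; estimating the likelihood ratio from a proper ROC model is not included, and the likelihood-ratio values are placeholders.

```python
# Sketch: the Bayes' theorem step used by the semiparametric calibration above, mapping a
# score's likelihood ratio LR(s) to probability of disease for a given prevalence pi.
def probability_of_disease(likelihood_ratio, prevalence):
    """P(D | s) = LR(s) * pi / (LR(s) * pi + (1 - pi))."""
    prior_odds = prevalence / (1.0 - prevalence)
    posterior_odds = likelihood_ratio * prior_odds
    return posterior_odds / (1.0 + posterior_odds)

if __name__ == "__main__":
    # hypothetical likelihood ratios for three classifier scores, at 10% prevalence
    for lr in (0.2, 1.0, 8.0):
        print(f"LR={lr:>4}: P(disease)={probability_of_disease(lr, prevalence=0.10):.3f}")
```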
34. The Use of Lossy Compression of Digital Mammograms for Primary Interpretation and Image Retention
- Author
- Aria Pezeshk and David L. Lerner
- Subjects
General Medicine, Lossy compression, Image (mathematics), Interpretation (model theory), Medicine, Mammography, Radiology, Nuclear Medicine and imaging, Computer vision, Radiographic Image Enhancement, Artificial intelligence, Radiology, Data compression
- Published
- 2015
35. Character Degradation Model and HMM Word Recognition System for Text Extracted from Maps
- Author
- Aria Pezeshk and Richard L. Tutwiler
- Subjects
Geographic information system, Information retrieval, Computer science, Feature extraction, Set (abstract data type), Digital image, Artificial intelligence, Zoom, Raster graphics, Natural language processing, Digitization, Graphics tablet
- Abstract
Geographic maps are one of the most abundant and valuable sources of accurate information about various features of bodies of land and water. Due to their importance in applications ranging from city planning and navigation to tracking changes in vegetation cover and coastline erosion, most countries have established dedicated organizations that are responsible for the production and maintenance of maps that cover their entire territories. This has resulted in the production and availability of a tremendous quantity of useful information about every part of the world.
As with other types of documents, new maps are nowadays produced in specialized computer programs, and are easy to manage and update since the individual information layers that form the whole map are available to the map producer. In addition to the change in the method of production, the applications and ways to process maps have also changed in the past few decades. Geographic Information Systems (GIS) have provided new means to analyze, process, visualize, and integrate various forms of geographic data. The input to these systems can be satellite or aerial imagery, remote sensing data (e.g. LIDAR images), raster or vector representations of geographic maps, and any other type of data that are related to locations (e.g. local populations, crime statistics, and textual information). Due to their wide availability, accuracy, and relative cheapness compared to other types of geo-referenced data, geographic maps are probably the most widely used source of information for GIS users.
However, the majority of existing geographic maps exist only in printed form. This means that unlike the case for computer generated maps, printed maps cannot be directly used as the input to a GIS since both the end users and the map producers only have access to the dense and complex mixture of regularly intersecting and/or overlapping set of graphical and textual elements rather than the individual features of interest. Currently the only reliable way of converting printed maps into computer readable format is to have a highly trained operator manually extract the individual sets of features (graphical and textual). The manual feature extraction methods consist of digitization using a digitizing tablet, and heads-up digitizing. In the first method, the paper map is placed on top of the digitizing tablet, and the operator traces over lines and other objects of interest using a stylus or a digitizing puck (a device with crosshairs and multiple buttons that enable data entry operations). In heads-up digitizing (otherwise known as on-screen digitizing) on the other hand, the paper map is first scanned into a digital image. The operator then traces over every single object of interest on the computer screen using a mouse. Since zooming into difficult
- Published
- 2011
36. Extended character defect model for recognition of text from maps
- Author
- Aria Pezeshk and Richard L. Tutwiler
- Subjects
Text mining, Character (mathematics), Pixel, Computer science, Feature extraction, Pattern recognition, Artificial intelligence, Hidden Markov model, Character recognition, Task (project management)
- Abstract
Topographic maps contain a small amount of text compared to other forms of printed documents. Furthermore, the text and graphical components typically intersect with one another thus making the extraction of text a very difficult task. Creating training sets with a suitable size from the actual characters in maps would therefore require the laborious processing of many maps with similar features and the manual extraction of character samples. This paper extends the types of defects represented by Baird's document image degradation model in order to create pseudo randomly generated training sets that closely mimic the various artifacts and defects encountered in characters extracted from maps. Two Hidden Markov Models are then trained and used to recognize the text. Tests performed on extracted street labels show an improvement in performance from 88.4% when only the original Baird's model is used to a character recognition rate of 93.2% when the extended defect model is used for training.
- Published
- 2010
37. Contour Line Recognition & Extraction from Scanned Colour Maps Using Dual Quantization of the Intensity Image
- Author
-
Richard L. Tutwiler and Aria Pezeshk
- Subjects
Pixel, Computer science, Feature extraction, Pattern recognition, Image processing, Image segmentation, Aliasing, Contour line, Computer vision, Adaptive histogram equalization, Artificial intelligence, Quantization (image processing)
- Abstract
Automatic separation of the different layers of information in maps poses an immense challenge due to the heavy interconnectedness of these layers. This process is further complicated by the problem of mixed color pixels and aliasing induced by the scanning process. In this paper we present a new semiautomatic method to extract contour lines from scanned color images of topographic maps. In the proposed method, contour lines are removed from the image using a novel algorithm based on quantization of the intensity image followed by contrast limited adaptive histogram equalization. Unlike other interactive map feature extraction methods, in the proposed algorithm the user is involved in only one simple step of the feature extraction process and no prior knowledge about the underlying image processing steps is required. This method is incorporated as a .NET API plugin into ArcGIS (a commercially available Geographic Information System (GIS)) and its performance is tested on a number of graphics rich map samples of various sources.
- Published
- 2008
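Entry 37 above extracts contour lines after quantizing the intensity image and applying contrast-limited adaptive histogram equalization (CLAHE). The sketch below covers only those two preprocessing steps with OpenCV; the random stand-in image and the parameter values are assumptions, and the full extraction pipeline is not reproduced.

```python
# Sketch: intensity quantization followed by CLAHE, the two preprocessing steps named
# above, using OpenCV. A random array stands in for a scanned map; in practice load one
# with cv2.imread and convert to grayscale with cv2.cvtColor(..., cv2.COLOR_BGR2GRAY).
import cv2
import numpy as np

def quantize(gray, levels=4):
    """Reduce a grayscale image to a small number of intensity levels."""
    step = 256 // levels
    return ((gray // step) * step).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gray = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)   # stand-in intensity image
    quantized = quantize(gray, levels=4)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))    # parameters are illustrative
    enhanced = clahe.apply(quantized)
    print(quantized.dtype, enhanced.shape)
```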
38. Techniques for virtual lung nodule insertion: volumetric and morphometric comparison of projection-based and image-based methods for quantitative CT.
- Author
- Marthony Robins, Justin Solomon, Pooyan Sahbaee, Martin Sedlmair, Kingshuk Roy Choudhury, Aria Pezeshk, Berkman Sahiner, and Ehsan Samei
- Subjects
Pulmonary nodules, Morphometrics, Cancer tomography
- Abstract
Virtual nodule insertion paves the way towards the development of standardized databases of hybrid CT images with known lesions. The purpose of this study was to assess three methods (an established and two newly developed techniques) for inserting virtual lung nodules into CT images. Assessment was done by comparing virtual nodule volume and shape to the CT-derived volume and shape of synthetic nodules. 24 synthetic nodules (three sizes, four morphologies, two repeats) were physically inserted into the lung cavity of an anthropomorphic chest phantom (KYOTO KAGAKU). The phantom was imaged with and without nodules on a commercial CT scanner (SOMATOM Definition Flash, Siemens) using a standard thoracic CT protocol at two dose levels (1.4 and 22 mGy CTDIvol). Raw projection data were saved and reconstructed with filtered back-projection and sinogram affirmed iterative reconstruction (SAFIRE, strength 5) at 0.6 mm slice thickness. Corresponding 3D idealized, virtual nodule models were co-registered with the CT images to determine each nodule’s location and orientation. Virtual nodules were voxelized, partial volume corrected, and inserted into nodule-free CT data (accounting for system imaging physics) using two methods: projection-based Technique A, and image-based Technique B. Also a third Technique C based on cropping a region of interest from the acquired image of the real nodule and blending it into the nodule-free image was tested. Nodule volumes were measured using a commercial segmentation tool (iNtuition, TeraRecon, Inc.) and deformation was assessed using the Hausdorff distance. Nodule volumes and deformations were compared between the idealized, CT-derived and virtual nodules using a linear mixed effects regression model which utilized the mean, standard deviation, and coefficient of variation (MeanRHD, STDRHD, and CVRHD) of the regional Hausdorff distance. Overall, there was a close concordance between the volumes of the CT-derived and virtual nodules. Percent differences between them were less than 3% for all insertion techniques and were not statistically significant in most cases. Correlation coefficient values were greater than 0.97. The deformation according to the Hausdorff distance was also similar between the CT-derived and virtual nodules with minimal statistical significance in the CVRHD for Techniques A, B, and C. This study shows that both projection-based and image-based nodule insertion techniques yield realistic nodule renderings with statistical similarity to the synthetic nodules with respect to nodule volume and deformation. These techniques could be used to create a database of hybrid CT images containing nodules of known size, location and morphology.
- Published
- 2017