40 results for "Mazurowski, MA"
Search Results
2. Deep learning-based features of breast MRI for prediction of occult invasive disease following a diagnosis of ductal carcinoma in situ: Preliminary data
- Author
-
Petrick, N, Mori, K, Zhu, Z, Harowicz, M, Zhang, Jun, Saha, A, Grimm, LJ, Hwang, S, and Mazurowski, MA
- Published
- 2018
3. Convolutional encoder-decoder for breast mass segmentation in digital breast tomosynthesis
- Author
-
Zhang, Jun, Ghate, SV, Grimm, LJ, Saha, A, Cain, EH, Zhu, Z, and Mazurowski, MA
- Abstract
Digital breast tomosynthesis (DBT) is a relatively new modality for breast imaging that can provide detailed assessment of dense tissue within the breast. In the domains of cancer diagnosis, radiogenomics, and resident education, it is important to accurately segment breast masses. However, breast mass segmentation is a very challenging task, since mass regions show low contrast against neighboring tissues. Notably, the task might become more difficult in cases that were assigned the BI-RADS 0 category, since this category includes many lesions that are of low conspicuity and locations that were deemed to be overlapping normal tissue upon further imaging and were not sent to biopsy. Segmentation of such lesions is of particular importance in the domain of reader performance analysis and education. In this paper, we propose a novel deep learning-based method for segmentation of BI-RADS 0 lesions in DBT. The key components of our framework are an encoding path for local-to-global feature extraction, and a decoding path to expand the images. To address the issue of limited training data, in the training stage, we propose to sample patches not only in mass regions but also in non-mass regions. We utilize a Dice-like loss function in the proposed network to alleviate the class-imbalance problem. The preliminary results on 40 subjects show the promise of our method. In addition to quantitative evaluation of the method, we present a visualization of the results that demonstrates both the performance of the algorithm and the difficulty of the task at hand.
- Published
- 2018
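The "Dice-like loss function" mentioned in the abstract above is commonly implemented as a soft Dice loss; a minimal NumPy sketch for binary masks follows (the paper's exact formulation may differ):

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for binary segmentation.

    pred   -- predicted foreground probabilities in [0, 1]
    target -- binary ground-truth mask of the same shape
    Returns 1 - Dice, so perfect overlap gives a loss of 0.
    """
    pred = pred.ravel().astype(np.float64)
    target = target.ravel().astype(np.float64)
    intersection = np.sum(pred * target)
    denom = np.sum(pred) + np.sum(target)
    dice = (2.0 * intersection + eps) / (denom + eps)
    return 1.0 - dice
```

Because the loss is normalized by the total foreground mass rather than the number of pixels, it is far less sensitive to the foreground/background class imbalance that plagues per-pixel losses on small lesions.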
4. Breast mass detection in mammography and tomosynthesis via fully convolutional network-based heatmap regression
- Author
-
Petrick, N, Mori, K, Zhang, Jun, Cain, EH, Saha, A, Zhu, Z, and Mazurowski, MA
- Published
- 2018
5. Breast cancer molecular subtype classification using deep features: Preliminary results
- Author
-
Petrick, N, Mori, K, Zhu, Z, Albadawy, E, Saha, A, Zhang, Jun, Harowicz, MR, and Mazurowski, MA
- Published
- 2018
6. Identifying error-making patterns in assessment of mammographic BI-RADS descriptors among radiology residents using statistical pattern recognition.
- Author
-
Mazurowski MA, Barnhart HX, Baker JA, and Tourassi GD
- Abstract
Rationale and Objective: The objective of this study is to test the hypothesis that there are patterns in erroneous assessment of BI-RADS features among radiology trainees when interpreting mammographic masses and that these patterns can be captured in individualized statistical user models. Identifying these patterns could be useful in personalizing and adapting educational material to complement the individual weaknesses of each trainee during his or her mammography education. Materials and Methods: Reading data of 33 mammographic cases containing masses was used. The cases were individually described by 10 radiology residents using four BI-RADS features: mass shape, mass margin, mass density, and parenchyma density. For each resident, an individual model was automatically constructed that predicts the likelihood (HIGH or LOW) of the resident erroneously assigning each BI-RADS descriptor. Error was defined as deviation of the resident's assessment from the expert assessments. We evaluated the predictive performance of the models using leave-one-out cross-validation. Results: The user models were able to predict which assessments have higher likelihood of error. The proportion of actual errors to the number of situations in which these errors could potentially occur was significantly higher (P < .05) when the user model assigned HIGH likelihood of error than when LOW likelihood of error was assigned for three of the four BI-RADS features. Overall, the difference between the HIGH and LOW likelihood of error groups was statistically significant (P < .0001) combining all four features. Conclusion: Error making in BI-RADS descriptor assessment appears to follow patterns that can be captured with statistical pattern recognition-based user models. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
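The leave-one-out cross-validation used to evaluate the per-resident models above can be sketched generically; the nearest-centroid classifier below is purely an illustrative stand-in, not the study's actual model:

```python
import numpy as np

def loo_accuracy(X, y, fit, predict):
    """Leave-one-out cross-validation: hold out each case once,
    train on the remaining cases, and score the held-out prediction."""
    n = len(y)
    correct = 0
    for i in range(n):
        mask = np.arange(n) != i
        model = fit(X[mask], y[mask])
        correct += (predict(model, X[i:i + 1])[0] == y[i])
    return correct / n

# Deliberately simple stand-in classifier: nearest class centroid.
def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_centroids(model, X):
    classes = list(model)
    d = np.array([[np.linalg.norm(x - model[c]) for c in classes] for x in X])
    return np.array([classes[j] for j in d.argmin(axis=1)])
```

With only 33 cases per resident, LOO makes maximal use of the data: each model is evaluated on every case while never training on the case being predicted.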
7. Pilot study of machine learning for detection of placenta accreta spectrum.
- Author
-
Zhang Y, Ellestad SC, Gilner JB, Pyne A, Boyd BK, Mazurowski MA, and Gatta LA
- Subjects
- Humans, Female, Pregnancy, Pilot Projects, Ultrasonography, Prenatal, Adult, Placenta Accreta diagnostic imaging, Machine Learning
- Published
- 2024
- Full Text
- View/download PDF
8. A publicly available deep learning model and dataset for segmentation of breast, fibroglandular tissue, and vessels in breast MRI.
- Author
-
Lew CO, Harouni M, Kirksey ER, Kang EJ, Dong H, Gu H, Grimm LJ, Walsh R, Lowell DA, and Mazurowski MA
- Subjects
- Humans, Female, Magnetic Resonance Imaging, Radiography, Breast Density, Deep Learning, Breast Neoplasms diagnostic imaging
- Abstract
Breast density, or the amount of fibroglandular tissue (FGT) relative to the overall breast volume, increases the risk of developing breast cancer. Although previous studies have utilized deep learning to assess breast density, the limited public availability of data and quantitative tools hinders the development of better assessment tools. Our objective was to (1) create and share a large dataset of pixel-wise annotations according to well-defined criteria, and (2) develop, evaluate, and share an automated segmentation method for breast, FGT, and blood vessels using convolutional neural networks. We used the Duke Breast Cancer MRI dataset to randomly select 100 MRI studies and manually annotated the breast, FGT, and blood vessels for each study. Model performance was evaluated using the Dice similarity coefficient (DSC). The model achieved DSC values of 0.92 for breast, 0.86 for FGT, and 0.65 for blood vessels on the test set. The correlation between our model's predicted breast density and the manually generated masks was 0.95. The correlation between the predicted breast density and qualitative radiologist assessment was 0.75. Our automated models can accurately segment breast, FGT, and blood vessels using pre-contrast breast MRI data. The data and the models were made publicly available., (© 2024. The Author(s).)
- Published
- 2024
- Full Text
- View/download PDF
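Breast density as defined in the abstract above (FGT relative to overall breast volume) follows directly from the segmentation masks; a small sketch, assuming boolean 3D masks with FGT lying inside the breast:

```python
import numpy as np

def breast_density(breast_mask, fgt_mask):
    """Percent breast density: FGT voxels as a fraction of breast voxels.
    Both inputs are boolean 3D arrays from a segmentation model."""
    breast_voxels = np.count_nonzero(breast_mask)
    if breast_voxels == 0:
        raise ValueError("empty breast mask")
    fgt_voxels = np.count_nonzero(fgt_mask & breast_mask)
    return 100.0 * fgt_voxels / breast_voxels
```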
9. SWSSL: Sliding Window-Based Self-Supervised Learning for Anomaly Detection in High-Resolution Images.
- Author
-
Dong H, Zhang Y, Gu H, Konz N, Zhang Y, and Mazurowski MA
- Subjects
- Supervised Machine Learning, Algorithms, Neural Networks, Computer
- Abstract
Anomaly detection (AD) aims to determine if an instance has properties different from those seen in normal cases. The success of this technique depends on how well a neural network learns from normal instances. We observe that the learning difficulty scales exponentially with the input resolution, making it infeasible to apply AD to high-resolution images. Resizing them to a lower resolution is a compromising solution and does not align with clinical practice where the diagnosis could depend on image details. In this work, we propose to train the network and perform inference at the patch level, through the sliding window algorithm. This simple operation allows the network to receive high-resolution images but introduces additional training difficulties, including inconsistent image structure and higher variance. We address these concerns by setting the network's objective to learn augmentation-invariant features. We further study the augmentation function in the context of medical imaging. In particular, we observe that the resizing operation, a key augmentation in general computer vision literature, is detrimental to detection accuracy, and the inverting operation can be beneficial. We also propose a new module that encourages the network to learn from adjacent patches to boost detection performance. Extensive experiments are conducted on breast tomosynthesis and chest X-ray datasets and our method improves 8.03% and 5.66% AUC on image-level classification respectively over the current leading techniques. The experimental results demonstrate the effectiveness of our approach.
- Published
- 2023
- Full Text
- View/download PDF
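The patch-level inference described in the abstract above rests on a standard sliding-window decomposition; a minimal 2D sketch (real pipelines typically also pad or add a final window flush with the image border):

```python
import numpy as np

def sliding_window_patches(image, patch, stride):
    """Yield (row, col, patch) tuples covering a 2D image with a sliding
    window, so a patch-trained model can score a high-resolution image
    without resizing it."""
    h, w = image.shape
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            yield r, c, image[r:r + patch, c:c + patch]
```

Per-patch anomaly scores can then be aggregated (e.g., by max or mean) into an image-level score.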
10. Improving Image Classification of Knee Radiographs: An Automated Image Labeling Approach.
- Author
-
Zhang J, Santos C, Park C, Mazurowski MA, and Colglazier R
- Subjects
- Humans, Radiography, Arthroplasty, Knee Joint diagnostic imaging, Radiology
- Abstract
Large numbers of radiographic images are available in musculoskeletal radiology practices which could be used for training of deep learning models for diagnosis of knee abnormalities. However, those images do not typically contain readily available labels due to limitations of human annotations. The purpose of our study was to develop an automated labeling approach that improves the image classification model to distinguish normal knee images from those with abnormalities or prior arthroplasty. The automated labeler was trained on a small set of labeled data to automatically label a much larger set of unlabeled data, further improving the image classification performance for knee radiographic diagnosis. We used BioBERT and EfficientNet as the feature extraction backbone of the labeler and imaging model, respectively. We developed our approach using 7382 patients and validated it on a separate set of 637 patients. The final image classification model, trained using both manually labeled and pseudo-labeled data, had a higher weighted average AUC (WA-AUC 0.903) and higher AUC values for all classes (normal AUC 0.894; abnormal AUC 0.896, arthroplasty AUC 0.990) compared to the baseline model (WA-AUC = 0.857; normal AUC 0.842; abnormal AUC 0.848, arthroplasty AUC 0.987), trained using only manually labeled data. Statistical tests showed that the improvement was significant on normal (p value < 0.002), abnormal (p value < 0.001), and WA-AUC (p value = 0.001). Our findings demonstrated that the proposed automated labeling approach significantly improves the performance of image classification for radiographic knee diagnosis, facilitating patient care and the curation of large knee datasets., (© 2023. The Author(s) under exclusive licence to Society for Imaging Informatics in Medicine.)
- Published
- 2023
- Full Text
- View/download PDF
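The workflow in the abstract above — train a labeler on a small labeled set, then pseudo-label a larger unlabeled pool before retraining — can be sketched generically. The classifier callbacks and the confidence threshold here are placeholders, not the study's BioBERT/EfficientNet setup:

```python
import numpy as np

def pseudo_label(fit, predict_proba, X_lab, y_lab, X_unlab, threshold=0.9):
    """Train on labeled data, keep only confident predictions on the
    unlabeled pool as pseudo-labels, and return the enlarged training set."""
    model = fit(X_lab, y_lab)
    proba = predict_proba(model, X_unlab)      # shape (n, n_classes)
    conf = proba.max(axis=1)
    keep = conf >= threshold
    X_aug = np.vstack([X_lab, X_unlab[keep]])
    y_aug = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
    return X_aug, y_aug
```

The final classifier is then retrained on `X_aug, y_aug`; the threshold trades pseudo-label quantity against label noise.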
11. MRI-based Deep Learning Assessment of Amyloid, Tau, and Neurodegeneration Biomarker Status across the Alzheimer Disease Spectrum.
- Author
-
Lew CO, Zhou L, Mazurowski MA, Doraiswamy PM, and Petrella JR
- Subjects
- Aged, Humans, Male, Amyloid, Amyloid beta-Peptides, Apolipoproteins E, Biomarkers, Magnetic Resonance Imaging methods, Positron-Emission Tomography methods, Retrospective Studies, tau Proteins, Female, Alzheimer Disease diagnostic imaging, Cognitive Dysfunction, Deep Learning
- Abstract
Background PET can be used for amyloid-tau-neurodegeneration (ATN) classification in Alzheimer disease, but incurs considerable cost and exposure to ionizing radiation. MRI currently has limited use in characterizing ATN status. Deep learning techniques can detect complex patterns in MRI data and have potential for noninvasive characterization of ATN status. Purpose To use deep learning to predict PET-determined ATN biomarker status using MRI and readily available diagnostic data. Materials and Methods MRI and PET data were retrospectively collected from the Alzheimer's Disease Imaging Initiative. PET scans were paired with MRI scans acquired within 30 days, from August 2005 to September 2020. Pairs were randomly split into subsets as follows: 70% for training, 10% for validation, and 20% for final testing. A bimodal Gaussian mixture model was used to threshold PET scans into positive and negative labels. MRI data were fed into a convolutional neural network to generate imaging features. These features were combined in a logistic regression model with patient demographics, APOE gene status, cognitive scores, hippocampal volumes, and clinical diagnoses to classify each ATN biomarker component as positive or negative. Area under the receiver operating characteristic curve (AUC) analysis was used for model evaluation. Feature importance was derived from model coefficients and gradients. Results There were 2099 amyloid (mean patient age, 75 years ± 10 [SD]; 1110 male), 557 tau (mean patient age, 75 years ± 7; 280 male), and 2768 FDG PET (mean patient age, 75 years ± 7; 1645 male) and MRI pairs. Model AUCs for the test set were as follows: amyloid, 0.79 (95% CI: 0.74, 0.83); tau, 0.73 (95% CI: 0.58, 0.86); and neurodegeneration, 0.86 (95% CI: 0.83, 0.89). Within the networks, high gradients were present in key temporal, parietal, frontal, and occipital cortical regions. Model coefficients for cognitive scores, hippocampal volumes, and APOE status were highest. 
Conclusion A deep learning algorithm predicted each component of PET-determined ATN status with acceptable to excellent efficacy using MRI and other available diagnostic data. © RSNA, 2023 Supplemental material is available for this article.
- Published
- 2023
- Full Text
- View/download PDF
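The bimodal Gaussian mixture thresholding of PET values described above can be sketched with a small EM fit followed by locating the equal-density crossing between the two components (initialization and grid search are simplifications of what a production implementation would do):

```python
import numpy as np

def bimodal_threshold(x, n_iter=200):
    """Fit a two-component 1D Gaussian mixture by EM, then return the
    value between the two means where the weighted component densities
    are equal (found on a fine grid)."""
    x = np.asarray(x, dtype=np.float64)
    # Crude initialization: split around the quartiles.
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point.
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations.
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    # Decision threshold: grid point between the means with equal density.
    grid = np.linspace(mu.min(), mu.max(), 1000)
    d = pi * np.exp(-0.5 * ((grid[:, None] - mu) / sigma) ** 2) / sigma
    return grid[np.argmin(np.abs(d[:, 0] - d[:, 1]))]
```

Values above the returned threshold would be labeled biomarker-positive, below it negative.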
12. Duke Liver Dataset: A Publicly Available Liver MRI Dataset with Liver Segmentation Masks and Series Labels.
- Author
-
Macdonald JA, Zhu Z, Konkel B, Mazurowski MA, Wiggins WF, and Bashir MR
- Abstract
The Duke Liver Dataset contains 2146 abdominal MRI series from 105 patients, including a majority with cirrhotic features, and 310 image series with corresponding manually segmented liver masks., Competing Interests: Disclosures of conflicts of interest: J.A.M. No relevant relationships. Z.Z. No relevant relationships. B.K. No relevant relationships. M.A.M. No relevant relationships. W.F.W. Research funding to institution from The Marcus Foundation and the National Institutes of Health (grant no. R01-NS123275); consulting fees from Qure.ai to author; honorarium paid to author from University of Wisconsin–GE CT Protocols Partnership Medical Advisory Board. M.R.B. Grants or contracts from Siemens Healthineers, Madrigal Pharmaceuticals, NGM Biopharmaceuticals, Metacrine, Corcept, and Carmot Therapeutics (author was primary investigator on research grants to institution); associate editor for Radiology., (© 2023 by the Radiological Society of North America, Inc.)
- Published
- 2023
- Full Text
- View/download PDF
13. Multistep Automated Data Labelling Procedure (MADLaP) for thyroid nodules on ultrasound: An artificial intelligence approach for automating image annotation.
- Author
-
Zhang J, Mazurowski MA, Allen BC, and Wildman-Tobriner B
- Subjects
- Humans, Artificial Intelligence, Data Curation, Ultrasonography methods, Neural Networks, Computer, Thyroid Nodule diagnostic imaging, Thyroid Nodule pathology
- Abstract
Machine learning (ML) for diagnosis of thyroid nodules on ultrasound is an active area of research. However, ML tools require large, well-labeled datasets, the curation of which is time-consuming and labor-intensive. The purpose of our study was to develop and test a deep-learning-based tool to facilitate and automate the data annotation process for thyroid nodules; we named our tool Multistep Automated Data Labelling Procedure (MADLaP). MADLaP was designed to take multiple inputs including pathology reports, ultrasound images, and radiology reports. Using multiple step-wise 'modules' including rule-based natural language processing, deep-learning-based imaging segmentation, and optical character recognition, MADLaP automatically identified images of a specific thyroid nodule and correctly assigned a pathology label. The model was developed using a training set of 378 patients across our health system and tested on a separate set of 93 patients. Ground truths for both sets were selected by an experienced radiologist. Performance metrics including yield (how many labeled images the model produced) and accuracy (percentage correct) were measured using the test set. MADLaP achieved a yield of 63 % and an accuracy of 83 %. The yield progressively increased as the input data moved through each module, while accuracy peaked part way through. Error analysis showed that inputs from certain examination sites had lower accuracy (40 %) than the other sites (90 %, 100 %). MADLaP successfully created curated datasets of labeled ultrasound images of thyroid nodules. While accurate, the relatively suboptimal yield of MADLaP exposed some challenges when trying to automatically label radiology images from heterogeneous sources. The complex task of image curation and annotation could be automated, allowing for enrichment of larger datasets for use in machine learning development., Competing Interests: Declaration of competing interest None., (Copyright © 2023 Elsevier B.V. 
All rights reserved.)
- Published
- 2023
- Full Text
- View/download PDF
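The rule-based natural language processing module described above can be sketched as a keyword dictionary with simple negation handling; the keywords and negation cues below are illustrative placeholders, not MADLaP's actual rules:

```python
import re

# Illustrative keyword dictionary; real systems use curated term lists.
DISEASE_KEYWORDS = {
    "nodule": ["nodule", "nodular"],
    "effusion": ["effusion"],
}
NEGATION_CUES = ["no ", "without ", "negative for "]

def extract_labels(report):
    """Return the set of disease labels whose keywords appear in the
    report outside a simply negated sentence."""
    labels = set()
    for sentence in re.split(r"[.;]\s*", report.lower()):
        negated = any(cue in sentence for cue in NEGATION_CUES)
        for label, keywords in DISEASE_KEYWORDS.items():
            if any(k in sentence for k in keywords) and not negated:
                labels.add(label)
    return labels
```

Sentence-level negation like this is crude (it misses scoped negation such as "no nodule but an effusion"), which is one reason pipelines like the one above chain several modules and validate yield and accuracy separately.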
14. Deep Learning for Breast MRI Style Transfer with Limited Training Data.
- Author
-
Cao S, Konz N, Duncan J, and Mazurowski MA
- Subjects
- Humans, Magnetic Resonance Imaging, Radiography, Image Processing, Computer-Assisted methods, Deep Learning
- Abstract
In this work we introduce a novel medical image style transfer method, StyleMapper, that can transfer medical scans to an unseen style with access to limited training data. This is made possible by training our model on unlimited possibilities of simulated random medical imaging styles on the training set, making our work more computationally efficient when compared with other style transfer methods. Moreover, our method enables arbitrary style transfer: transferring images to styles unseen in training. This is useful for medical imaging, where images are acquired using different protocols and different scanner models, resulting in a variety of styles that data may need to be transferred between. Our model disentangles image content from style and can modify an image's style by simply replacing the style encoding with one extracted from a single image of the target style, with no additional optimization required. This also allows the model to distinguish between different styles of images, including among those that were unseen in training. We propose a formal description of the proposed model. Experimental results on breast magnetic resonance images indicate the effectiveness of our method for style transfer. Our style transfer method allows for the alignment of medical images taken with different scanners into a single unified style dataset, allowing for the training of other downstream tasks on such a dataset for tasks such as classification, object detection and others., (© 2022. The Author(s) under exclusive licence to Society for Imaging Informatics in Medicine.)
- Published
- 2023
- Full Text
- View/download PDF
15. A Competition, Benchmark, Code, and Data for Using Artificial Intelligence to Detect Lesions in Digital Breast Tomosynthesis.
- Author
-
Konz N, Buda M, Gu H, Saha A, Yang J, Chledowski J, Park J, Witowski J, Geras KJ, Shoshan Y, Gilboa-Solomon F, Khapun D, Ratner V, Barkan E, Ozery-Flato M, Martí R, Omigbodun A, Marasinou C, Nakhaei N, Hsu W, Sahu P, Hossain MB, Lee J, Santos C, Przelaskowski A, Kalpathy-Cramer J, Bearce B, Cha K, Farahani K, Petrick N, Hadjiiski L, Drukker K, Armato SG 3rd, and Mazurowski MA
- Subjects
- Humans, Female, Benchmarking, Mammography methods, Algorithms, Radiographic Image Interpretation, Computer-Assisted methods, Artificial Intelligence, Breast Neoplasms diagnostic imaging
- Abstract
Importance: An accurate and robust artificial intelligence (AI) algorithm for detecting cancer in digital breast tomosynthesis (DBT) could significantly improve detection accuracy and reduce health care costs worldwide., Objectives: To make training and evaluation data for the development of AI algorithms for DBT analysis available, to develop well-defined benchmarks, and to create publicly available code for existing methods., Design, Setting, and Participants: This diagnostic study is based on a multi-institutional international grand challenge in which research teams developed algorithms to detect lesions in DBT. A data set of 22 032 reconstructed DBT volumes was made available to research teams. Phase 1, in which teams were provided 700 scans from the training set, 120 from the validation set, and 180 from the test set, took place from December 2020 to January 2021, and phase 2, in which teams were given the full data set, took place from May to July 2021., Main Outcomes and Measures: The overall performance was evaluated by mean sensitivity for biopsied lesions using only DBT volumes with biopsied lesions; ties were broken by including all DBT volumes., Results: A total of 8 teams participated in the challenge. The team with the highest mean sensitivity for biopsied lesions was the NYU B-Team, with 0.957 (95% CI, 0.924-0.984), and the second-place team, ZeDuS, had a mean sensitivity of 0.926 (95% CI, 0.881-0.964). When the results were aggregated, the mean sensitivity for all submitted algorithms was 0.879; for only those who participated in phase 2, it was 0.926., Conclusions and Relevance: In this diagnostic study, an international competition produced algorithms with high sensitivity for using AI to detect lesions on DBT images. 
A standardized performance benchmark for the detection task using publicly available clinical imaging data was released, with detailed descriptions and analyses of submitted algorithms accompanied by a public release of their predictions and code for selected methods. These resources will serve as a foundation for future research on computer-assisted diagnosis methods for DBT, significantly lowering the barrier of entry for new researchers.
- Published
- 2023
- Full Text
- View/download PDF
16. The Need for Targeted Labeling of Machine Learning-Based Software as a Medical Device.
- Author
-
Goldstein BA, Mazurowski MA, and Li C
- Subjects
- Humans, Software, Machine Learning
- Published
- 2022
- Full Text
- View/download PDF
17. Anomaly Detection of Calcifications in Mammography Based on 11,000 Negative Cases.
- Author
-
Hou R, Peng Y, Grimm LJ, Ren Y, Mazurowski MA, Marks JR, King LM, Maley CC, Hwang ES, and Lo JY
- Subjects
- Diagnosis, Computer-Assisted, Female, Humans, Machine Learning, Mammography methods, Breast Neoplasms diagnostic imaging, Calcinosis diagnostic imaging
- Abstract
In mammography, calcifications are one of the most common signs of breast cancer. Detection of such lesions is an active area of research for computer-aided diagnosis and machine learning algorithms. Due to limited numbers of positive cases, many supervised detection models suffer from overfitting and fail to generalize. We present a one-class, semi-supervised framework using a deep convolutional autoencoder trained with over 50,000 images from 11,000 negative-only cases. Since the model learned from only normal breast parenchymal features, calcifications produced large signals when comparing the residuals between input and reconstruction output images. As a key advancement, a structural dissimilarity index was used to suppress non-structural noises. Our selected model achieved pixel-based AUROC of 0.959 and AUPRC of 0.676 during validation, where calcification masks were defined in a semi-automated process. Although not trained directly on any cancers, detection performance of calcification lesions on 1,883 testing images (645 malignant and 1238 negative) achieved 75% sensitivity at 2.5 false positives per image. Performance plateaued early when trained with only a fraction of the cases, and greater model complexity or a larger dataset did not improve performance. This study demonstrates the potential of this anomaly detection approach to detect mammographic calcifications in a semi-supervised manner with efficient use of a small number of labeled images, and may facilitate new clinical applications such as computer-aided triage and quality improvement.
- Published
- 2022
- Full Text
- View/download PDF
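The pixel-based AUROC reported above can be computed without an explicit ROC sweep via the Mann-Whitney rank formulation; a small sketch:

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive scores higher than a randomly chosen
    negative, with ties counted as half."""
    scores = np.asarray(scores, dtype=np.float64)
    labels = np.asarray(labels).astype(bool)
    pos, neg = scores[labels], scores[~labels]
    # Pairwise comparisons: fine for a sketch, O(n_pos * n_neg) memory.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

For the millions of pixels involved in pixel-level evaluation, a rank-based O(n log n) variant would replace the pairwise comparison.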
18. Multi-label annotation of text reports from computed tomography of the chest, abdomen, and pelvis using deep learning.
- Author
-
D'Anniballe VM, Tushar FI, Faryna K, Han S, Mazurowski MA, Rubin GD, and Lo JY
- Subjects
- Abdomen, Humans, Neural Networks, Computer, Pelvis diagnostic imaging, Tomography, X-Ray Computed, Deep Learning
- Abstract
Background: There is progress to be made in building artificially intelligent systems to detect abnormalities that are not only accurate but can handle the true breadth of findings that radiologists encounter in body (chest, abdomen, and pelvis) computed tomography (CT). Currently, the major bottleneck for developing multi-disease classifiers is a lack of manually annotated data. The purpose of this work was to develop high throughput multi-label annotators for body CT reports that can be applied across a variety of abnormalities, organs, and disease states thereby mitigating the need for human annotation., Methods: We used a dictionary approach to develop rule-based algorithms (RBA) for extraction of disease labels from radiology text reports. We targeted three organ systems (lungs/pleura, liver/gallbladder, kidneys/ureters) with four diseases per system based on their prevalence in our dataset. To expand the algorithms beyond pre-defined keywords, attention-guided recurrent neural networks (RNN) were trained using the RBA-extracted labels to classify reports as being positive for one or more diseases or normal for each organ system. Alternative effects on disease classification performance were evaluated using random initialization or pre-trained embedding as well as different sizes of training datasets. The RBA was tested on a subset of 2158 manually labeled reports and performance was reported as accuracy and F-score. The RNN was tested against a test set of 48,758 reports labeled by RBA and performance was reported as area under the receiver operating characteristic curve (AUC), with 95% CIs calculated using the DeLong method., Results: Manual validation of the RBA confirmed 91-99% accuracy across the 15 different labels. Our models extracted disease labels from 261,229 radiology reports of 112,501 unique subjects. Pre-trained models outperformed random initialization across all diseases. 
As the training dataset size was reduced, performance was robust except for a few diseases with a relatively small number of cases. Pre-trained classification AUCs reached > 0.95 for all four disease outcomes and normality across all three organ systems., Conclusions: Our label-extracting pipeline was able to encompass a variety of cases and diseases in body CT reports by generalizing beyond strict rules with exceptional accuracy. The method described can be easily adapted to enable automated labeling of hospital-scale medical data sets for training image-based disease classifiers., (© 2022. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
19. Prediction of Upstaging in Ductal Carcinoma in Situ Based on Mammographic Radiomic Features.
- Author
-
Hou R, Grimm LJ, Mazurowski MA, Marks JR, King LM, Maley CC, Lynch T, van Oirsouw M, Rogers K, Stone N, Wallis M, Teuwen J, Wesseling J, Hwang ES, and Lo JY
- Subjects
- Adult, Aged, Aged, 80 and over, Female, Humans, Male, Mammography, Middle Aged, Retrospective Studies, Breast Neoplasms diagnostic imaging, Calcinosis, Carcinoma in Situ, Carcinoma, Ductal, Breast pathology, Carcinoma, Intraductal, Noninfiltrating diagnostic imaging, Carcinoma, Intraductal, Noninfiltrating pathology
- Abstract
Background Improving diagnosis of ductal carcinoma in situ (DCIS) before surgery is important in choosing optimal patient management strategies. However, patients may harbor occult invasive disease not detected until definitive surgery. Purpose To assess the performance and clinical utility of mammographic radiomic features in the prediction of occult invasive cancer among women diagnosed with DCIS on the basis of core biopsy findings. Materials and Methods In this Health Insurance Portability and Accountability Act-compliant retrospective study, digital magnification mammographic images were collected from women who underwent breast core-needle biopsy for calcifications that was performed at a single institution between September 2008 and April 2017 and yielded a diagnosis of DCIS. The database query was directed at asymptomatic women with calcifications without a mass, architectural distortion, asymmetric density, or palpable disease. Logistic regression with regularization was used. Differences across training and internal test set by upstaging rate, age, lesion size, and estrogen and progesterone receptor status were assessed by using the Kruskal-Wallis or χ² test. Results The study consisted of 700 women with DCIS (age range, 40-89 years; mean age, 59 years ± 10 [standard deviation]), including 114 with lesions (16.3%) upstaged to invasive cancer at subsequent surgery. The sample was split randomly into 400 women for the training set and 300 for the testing set (mean ages: training set, 59 years ± 10; test set, 59 years ± 10; P = .85). A total of 109 radiomic and four clinical features were extracted. The best model on the test set by using all radiomic and clinical features helped predict upstaging with an area under the receiver operating characteristic curve of 0.71 (95% CI: 0.62, 0.79). For a fixed high sensitivity (90%), the model yielded a specificity of 22%, a negative predictive value of 92%, and an odds ratio of 2.4 (95% CI: 1.8, 3.2). High specificity (90%) corresponded to a sensitivity of 37%, positive predictive value of 41%, and odds ratio of 5.0 (95% CI: 2.8, 9.0). Conclusion Machine learning models that use radiomic features applied to mammographic calcifications may help predict upstaging of ductal carcinoma in situ, which can refine clinical decision making and treatment planning. © RSNA, 2022.
- Published
- 2022
- Full Text
- View/download PDF
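The fixed-sensitivity operating points quoted above (e.g., specificity at a fixed 90% sensitivity) can be derived from model scores by choosing the threshold that keeps the desired fraction of positives; a sketch, assuming higher scores indicate upstaging:

```python
import numpy as np

def specificity_at_sensitivity(scores, labels, target_sensitivity=0.90):
    """Pick the threshold that classifies at least `target_sensitivity`
    of positives as positive, then report the specificity achieved on
    the negatives at that threshold."""
    scores = np.asarray(scores, dtype=np.float64)
    labels = np.asarray(labels).astype(bool)
    pos, neg = scores[labels], scores[~labels]
    # Threshold at the (1 - target) quantile of the positive scores.
    thresh = np.quantile(pos, 1.0 - target_sensitivity)
    return np.mean(neg < thresh)
```

Pinning sensitivity high and reading off specificity matches the clinical framing here: missing occult invasive disease is costlier than a false alarm.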
20. 3D Pyramid Pooling Network for Abdominal MRI Series Classification.
- Author
-
Zhu Z, Mittendorf A, Shropshire E, Allen B, Miller C, Bashir MR, and Mazurowski MA
- Subjects
- Liver, Neural Networks, Computer, Algorithms, Magnetic Resonance Imaging methods
- Abstract
Recognizing and organizing different series in an MRI examination is important both for clinical review and research, but it is poorly addressed by the current generation of picture archiving and communication systems (PACSs) and post-processing workstations. In this paper, we study the problem of using deep convolutional neural networks for automatic classification of abdominal MRI series to one of many series types. Our contributions are three-fold. First, we created a large abdominal MRI dataset containing 3717 MRI series including 188,665 individual images, derived from liver examinations. 30 different series types are represented in this dataset. The dataset was annotated by consensus readings from two radiologists. Both the MRIs and the annotations were made publicly available. Second, we proposed a 3D pyramid pooling network, which can elegantly handle abdominal MRI series with varied sizes of each dimension, and achieved state-of-the-art classification performance. Third, we performed the first ever comparison between the algorithm and the radiologists on an additional dataset and had several meaningful findings.
- Published
- 2022
- Full Text
- View/download PDF
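The pyramid pooling idea in entry 20 — mapping volumes with varied sizes of each dimension to a fixed-length representation — can be sketched in numpy. This is an illustrative stand-in for one pooling stage, not the published network; it assumes each volume dimension is at least as large as the largest pyramid level:

```python
import numpy as np

def adaptive_avg_pool3d(vol, bins):
    """Average-pool a (D, H, W) volume into a (bins, bins, bins) grid,
    regardless of input size (like torch's AdaptiveAvgPool3d)."""
    edges = [np.linspace(0, n, bins + 1).astype(int) for n in vol.shape]
    out = np.empty((bins, bins, bins))
    for i in range(bins):
        for j in range(bins):
            for k in range(bins):
                out[i, j, k] = vol[edges[0][i]:edges[0][i + 1],
                                   edges[1][j]:edges[1][j + 1],
                                   edges[2][k]:edges[2][k + 1]].mean()
    return out

def pyramid_pool3d(vol, levels=(1, 2, 4)):
    """Concatenate pooled grids from several resolutions into one
    fixed-length descriptor (here 1**3 + 2**3 + 4**3 = 73 values),
    so downstream layers see the same size for any input volume."""
    return np.concatenate([adaptive_avg_pool3d(vol, b).ravel()
                           for b in levels])
```

The fixed output length is what lets a classifier head accept series whose slice counts and in-plane sizes differ.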
21. Classification of Multiple Diseases on Body CT Scans Using Weakly Supervised Deep Learning.
- Author
-
Tushar FI, D'Anniballe VM, Hou R, Mazurowski MA, Fu W, Samei E, Rubin GD, and Lo JY
- Abstract
Purpose: To design multidisease classifiers for body CT scans for three different organ systems using automatically extracted labels from radiology text reports., Materials and Methods: This retrospective study included a total of 12 092 patients (mean age, 57 years ± 18 [standard deviation]; 6172 women) for model development and testing. Rule-based algorithms were used to extract 19 225 disease labels from 13 667 body CT scans performed between 2012 and 2017. Using a three-dimensional DenseVNet, three organ systems were segmented: lungs and pleura, liver and gallbladder, and kidneys and ureters. For each organ system, a three-dimensional convolutional neural network classified each as showing no apparent disease or as having one or more of four common diseases, for a total of 15 different labels across all three models. Testing was performed on a subset of 2158 CT volumes relative to 2875 manually derived reference labels from 2133 patients (mean age, 58 years ± 18; 1079 women). Performance was reported as area under the receiver operating characteristic curve (AUC), with 95% CIs calculated using the DeLong method., Results: Manual validation of the extracted labels confirmed 91%-99% accuracy across the 15 different labels. AUCs for lungs and pleura labels were as follows: atelectasis, 0.77 (95% CI: 0.74, 0.81); nodule, 0.65 (95% CI: 0.61, 0.69); emphysema, 0.89 (95% CI: 0.86, 0.92); effusion, 0.97 (95% CI: 0.96, 0.98); and no apparent disease, 0.89 (95% CI: 0.87, 0.91). AUCs for liver and gallbladder were as follows: hepatobiliary calcification, 0.62 (95% CI: 0.56, 0.67); lesion, 0.73 (95% CI: 0.69, 0.77); dilation, 0.87 (95% CI: 0.84, 0.90); fatty, 0.89 (95% CI: 0.86, 0.92); and no apparent disease, 0.82 (95% CI: 0.78, 0.85). 
AUCs for kidneys and ureters were as follows: stone, 0.83 (95% CI: 0.79, 0.87); atrophy, 0.92 (95% CI: 0.89, 0.94); lesion, 0.68 (95% CI: 0.64, 0.72); cyst, 0.70 (95% CI: 0.66, 0.73); and no apparent disease, 0.79 (95% CI: 0.75, 0.83)., Conclusion: Weakly supervised deep learning models were able to classify diverse diseases in multiple organ systems from CT scans. Keywords: CT, Diagnosis/Classification/Application Domain, Semisupervised Learning, Whole-Body Imaging. © RSNA, 2022., Competing Interests: Disclosures of Conflicts of Interest: F.I.T. No relevant relationships. V.M.D. No relevant relationships. R.H. No relevant relationships. M.A.M. No relevant relationships. W.F. No relevant relationships. E.S. No relevant relationships. G.D.R. No relevant relationships. J.Y.L. MAIA Erasmus and University of Girona fellowship covered part of F.I.T.'s graduate stipend while he was a visiting scholar; NVIDIA GPU card given to the laboratory., (© 2022 by the Radiological Society of North America, Inc.)
- Published
- 2021
- Full Text
- View/download PDF
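The rule-based label extraction described in entry 21 can be illustrated with a toy keyword-plus-negation matcher. The `RULES` patterns below are hypothetical and far simpler than the study's algorithms (which handle report sections, synonyms, and richer negation):

```python
import re

# Hypothetical keyword rules per label; illustrative only.
RULES = {
    "effusion": r"\bpleural effusion\b",
    "emphysema": r"\bemphysema\b",
    "stone": r"\b(renal|kidney|ureteral) (stone|calculus|calculi)\b",
}

def extract_labels(report):
    """Return the set of disease labels mentioned affirmatively,
    skipping mentions negated earlier in the same sentence."""
    text = report.lower()
    found = set()
    for label, pattern in RULES.items():
        for m in re.finditer(pattern, text):
            sentence_start = text.rfind(".", 0, m.start()) + 1
            preceding = text[sentence_start:m.start()]
            if re.search(r"\b(no|without|negative for)\b", preceding):
                continue
            found.add(label)
    return found
```

Labels produced this way are noisy, which is exactly why the paper validates a subset against manually derived reference labels.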
22. A Data Set and Deep Learning Algorithm for the Detection of Masses and Architectural Distortions in Digital Breast Tomosynthesis Images.
- Author
-
Buda M, Saha A, Walsh R, Ghate S, Li N, Swiecicki A, Lo JY, and Mazurowski MA
- Subjects
- Aged, Breast diagnostic imaging, False Positive Reactions, Female, Humans, Middle Aged, ROC Curve, Reproducibility of Results, Breast Neoplasms diagnosis, Datasets as Topic, Deep Learning, Early Detection of Cancer methods, Mammography
- Abstract
Importance: Breast cancer screening is among the most common radiological tasks, with more than 39 million examinations performed each year. While it has been among the most studied medical imaging applications of artificial intelligence, the development and evaluation of algorithms are hindered by the lack of well-annotated, large-scale publicly available data sets., Objectives: To curate, annotate, and make publicly available a large-scale data set of digital breast tomosynthesis (DBT) images to facilitate the development and evaluation of artificial intelligence algorithms for breast cancer screening; to develop a baseline deep learning model for breast cancer detection; and to test this model using the data set to serve as a baseline for future research., Design, Setting, and Participants: In this diagnostic study, 16 802 DBT examinations with at least 1 reconstruction view available, performed between August 26, 2014, and January 29, 2018, were obtained from Duke Health System and analyzed. From the initial cohort, examinations were divided into 4 groups and split into training and test sets for the development and evaluation of a deep learning model. Images with foreign objects or spot compression views were excluded. Data analysis was conducted from January 2018 to October 2020., Exposures: Screening DBT., Main Outcomes and Measures: The detection algorithm was evaluated with breast-based free-response receiver operating characteristic curve and sensitivity at 2 false positives per volume., Results: The curated data set contained 22 032 reconstructed DBT volumes that belonged to 5610 studies from 5060 patients with a mean (SD) age of 55 (11) years and 5059 (100.0%) women. This included 4 groups of studies: (1) 5129 (91.4%) normal studies; (2) 280 (5.0%) actionable studies, for which additional imaging was needed but no biopsy was performed; (3) 112 (2.0%) benign biopsied studies; and (4) 89 studies (1.6%) with cancer. 
Our data set included masses and architectural distortions that were annotated by 2 experienced radiologists. Our deep learning model reached breast-based sensitivity of 65% (39 of 60; 95% CI, 56%-74%) at 2 false positives per DBT volume on a test set of 460 examinations from 418 patients., Conclusions and Relevance: The large, diverse, and curated data set presented in this study could facilitate the development and evaluation of artificial intelligence algorithms for breast cancer screening by providing data for training as well as a common set of cases for model validation. The performance of the model developed in this study showed that the task remains challenging; its performance could serve as a baseline for future model development.
- Published
- 2021
- Full Text
- View/download PDF
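The evaluation metric in entry 22 (sensitivity at 2 false positives per volume, one point on the free-response ROC curve) can be sketched as follows. This is an illustrative simplification, not the study's evaluation code, and it assumes each true-positive detection hits a distinct lesion:

```python
def sensitivity_at_fp_rate(detections, n_lesions, n_volumes, fp_per_volume=2.0):
    """One operating point on the FROC curve: sweep candidate
    detections from highest to lowest score and report lesion
    sensitivity once the false-positive budget
    (fp_per_volume * n_volumes) is exhausted."""
    fp_budget = fp_per_volume * n_volumes
    tp = fp = 0
    for score, is_true_positive in sorted(detections, key=lambda d: -d[0]):
        if is_true_positive:
            tp += 1
        else:
            fp += 1
            if fp > fp_budget:
                break
    return tp / n_lesions
```

A full FROC analysis traces this sensitivity over a range of false-positive budgets rather than a single one.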
23. A generative adversarial network-based abnormality detection using only normal images for model training with application to digital breast tomosynthesis.
- Author
-
Swiecicki A, Konz N, Buda M, and Mazurowski MA
- Subjects
- Female, Humans, Middle Aged, Radiographic Image Enhancement methods, Radiographic Image Interpretation, Computer-Assisted methods, Breast diagnostic imaging, Breast Neoplasms diagnostic imaging, Computer Simulation, Mammography methods, Neural Networks, Computer
- Abstract
Deep learning has shown tremendous potential in the task of object detection in images. However, a common challenge with this task is when only a limited number of images containing the object of interest are available. This is a particular issue in cancer screening, such as digital breast tomosynthesis (DBT), where less than 1% of cases contain cancer. In this study, we propose a method to train an inpainting generative adversarial network to be used for cancer detection using only images that do not contain cancer. During inference, we removed a part of the image and used the network to complete the removed part. A significant error in completing an image part was considered an indication that the location is unexpected and thus abnormal. A large dataset of DBT images used in this study was collected at Duke University. It consisted of 19,230 reconstructed volumes from 4348 patients. Cancerous masses and architectural distortions were marked with bounding boxes by radiologists. Our experiments showed that the locations containing cancer were associated with a notably higher completion error than the non-cancer locations (mean error ratio of 2.77). All data used in this study has been made publicly available by the authors.
- Published
- 2021
- Full Text
- View/download PDF
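The anomaly-scoring idea in entry 23 — mask a region, complete it, and flag locations where the completion error is unusually high — can be sketched with a trivial surround-median "inpainter" standing in for the paper's GAN; everything below is an illustrative assumption, not the published pipeline:

```python
import numpy as np

def completion_error(image, y, x, size=8):
    """Mask a size-by-size patch, 'complete' it with a trivial
    surround-median inpainter (a stand-in for the paper's GAN),
    and return the mean absolute completion error."""
    patch = image[y:y + size, x:x + size].copy()
    masked = image.astype(float).copy()
    masked[y:y + size, x:x + size] = np.nan
    surround = masked[max(0, y - size):y + 2 * size,
                      max(0, x - size):x + 2 * size]
    fill = np.nanmedian(surround)   # nanmedian ignores the masked patch
    return float(np.abs(patch - fill).mean())

def error_ratio(image, y, x, reference_locations, size=8):
    """Completion error at (y, x) relative to the mean error at
    presumed-normal reference locations; ratios well above 1 flag
    unexpected (potentially abnormal) content."""
    ref = np.mean([completion_error(image, ry, rx, size)
                   for ry, rx in reference_locations])
    return completion_error(image, y, x, size) / ref
```

With a learned inpainter, normal tissue is reconstructed well while unexpected structures are not, which is what drives the reported error ratio.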
24. Do We Expect More from Radiology AI than from Radiologists?
- Author
-
Mazurowski MA
- Abstract
The expectations of radiology artificial intelligence do not match expectations of radiologists in terms of performance and explainability., Competing Interests: Disclosures of Conflicts of Interest: M.A.M. Activities related to the present article: disclosed no relevant relationships. Activities not related to the present article: disclosed no relevant relationships. Other relationships: institution has U.S. patent application 15/209,212 (systems and methods for extracting prognostic image features)., (© 2021 by the Radiological Society of North America, Inc.)
- Published
- 2021
- Full Text
- View/download PDF
25. Prediction of Upstaged Ductal Carcinoma In Situ Using Forced Labeling and Domain Adaptation.
- Author
-
Hou R, Mazurowski MA, Grimm LJ, Marks JR, King LM, Maley CC, Hwang ES, and Lo JY
- Subjects
- Breast diagnostic imaging, Female, Humans, Mammography, ROC Curve, Retrospective Studies, Breast Neoplasms diagnostic imaging, Carcinoma, Intraductal, Noninfiltrating diagnostic imaging
- Abstract
Objective: The goal of this study is to use adjunctive classes to improve a predictive model whose performance is limited by the common problems of small numbers of primary cases, high feature dimensionality, and poor class separability. Specifically, our clinical task is to use mammographic features to predict whether ductal carcinoma in situ (DCIS) identified at needle core biopsy will be later upstaged or shown to contain invasive breast cancer., Methods: To improve the prediction of pure DCIS (negative) versus upstaged DCIS (positive) cases, this study considers the adjunctive roles of two related classes: atypical ductal hyperplasia (ADH), a non-cancer type of breast abnormality, and invasive ductal carcinoma (IDC), with 113 computer vision based mammographic features extracted from each case. To improve the baseline Model A's classification of pure vs. upstaged DCIS, we designed three different strategies (Models B, C, D) with different ways of embedding features or inputs., Results: Based on ROC analysis, the baseline Model A performed with AUC of 0.614 (95% CI, 0.496-0.733). All three new models performed better than the baseline, with domain adaptation (Model D) performing the best with an AUC of 0.697 (95% CI, 0.595-0.797)., Conclusion: We improved the prediction performance of DCIS upstaging by embedding two related pathology classes in different training phases., Significance: The three new strategies of embedding related class data all outperformed the baseline model, thus demonstrating not only feature similarities among these different classes, but also the potential for improving classification by using other related classes.
- Published
- 2020
- Full Text
- View/download PDF
26. Deep Radiogenomics of Lower-Grade Gliomas: Convolutional Neural Networks Predict Tumor Genomic Subtypes Using MR Images.
- Author
-
Buda M, AlBadawy EA, Saha A, and Mazurowski MA
- Abstract
Purpose: To employ deep learning to predict genomic subtypes of lower-grade glioma (LGG) tumors based on their appearance at MRI., Materials and Methods: Imaging data from The Cancer Imaging Archive and genomic data from The Cancer Genome Atlas from 110 patients from five institutions with lower-grade gliomas (World Health Organization grade II and III) were used in this study. A convolutional neural network was trained to predict tumor genomic subtype based on the MRI of the tumor. Two different deep learning approaches were tested: training from random initialization and transfer learning. Deep learning models were pretrained on glioblastoma MRI, instead of natural images, to determine if performance was improved for the detection of LGGs. The models were evaluated using area under the receiver operating characteristic curve (AUC) with cross-validation. Imaging data and annotations used in this study are publicly available., Results: The best performing model was based on transfer learning from glioblastoma MRI. It achieved AUC of 0.730 (95% confidence interval [CI]: 0.605, 0.844) for discriminating cluster-of-clusters 2 from others. For the same task, a network trained from scratch achieved an AUC of 0.680 (95% CI: 0.538, 0.811), whereas a model pretrained on natural images achieved an AUC of 0.640 (95% CI: 0.521, 0.763)., Conclusion: These findings show the potential of utilizing deep learning to identify relationships between cancer imaging and cancer genomics in LGGs. However, more accurate models are needed to justify clinical use of such tools, which might be obtained using substantially larger training datasets. Supplemental material is available for this article. © RSNA, 2020., Competing Interests: Disclosures of Conflicts of Interest: M.B. disclosed no relevant relationships. E.A.A. disclosed no relevant relationships. A.S. disclosed no relevant relationships. M.A.M. Activities related to the present article: advisory role with Gradient Health. 
Activities not related to the present article: institution has received grant money from Bracco Diagnostics. Other relationships: disclosed no relevant relationships., (© 2020 by the Radiological Society of North America, Inc.)
- Published
- 2020
- Full Text
- View/download PDF
27. Breast Cancer Radiogenomics: Current Status and Future Directions.
- Author
-
Grimm LJ and Mazurowski MA
- Subjects
- Humans, Magnetic Resonance Imaging, Breast Neoplasms diagnostic imaging, Breast Neoplasms genetics, Genomics
- Abstract
Radiogenomics is an area of research that aims to identify associations between imaging phenotypes ("radio-") and tumor genome ("-genomics"). Breast cancer radiogenomics research in particular has been an especially prolific area of investigation in recent years as evidenced by the large number and variety of publications and conference presentations. To date, research has primarily been focused on dynamic contrast enhanced pre-operative breast MRI and breast cancer molecular subtypes, but investigations have extended to all breast imaging modalities as well as multiple additional genetic markers including those that are commercially available. Furthermore, both human and computer-extracted features as well as deep learning techniques have been explored. This review will summarize the specific imaging modalities used in radiogenomics analysis, describe the methods of extracting imaging features, and present the types of genomics, molecular, and related information used for analysis. Finally, the limitations and future directions of breast cancer radiogenomics research will be discussed., (Copyright © 2019 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.)
- Published
- 2020
- Full Text
- View/download PDF
28. Artificial Intelligence in Radiology: Some Ethical Considerations for Radiologists and Algorithm Developers.
- Author
-
Mazurowski MA
- Subjects
- Algorithms, Forecasting, Humans, Radiologists, Artificial Intelligence, Radiology
- Abstract
As artificial intelligence (AI) is finding its place in radiology, it is important to consider how to guide the research and clinical implementation in a way that will be most beneficial to patients. Although there are multiple aspects of this issue, I consider a specific one: a potential misalignment of the self-interests of radiologists and AI developers with the best interests of the patients. Radiologists know that supporting research into AI and advocating for its adoption in clinical settings could diminish their employment opportunities and reduce respect for their profession. This provides an incentive to oppose AI in various ways. AI developers have an incentive to hype their discoveries to gain attention. This could provide short-term personal gains; however, it could also create a distrust toward the field if it became apparent that the state of the art was far from where it was promised to be. The future research and clinical implementation of AI in radiology will be partially determined by radiologists and AI researchers. Therefore, it is very important that we recognize our own personal motivations and biases and act responsibly to ensure the highest benefit of the AI transformation to the patients., (Copyright © 2019. Published by Elsevier Inc.)
- Published
- 2020
- Full Text
- View/download PDF
29. Hierarchical Convolutional Neural Networks for Segmentation of Breast Tumors in MRI With Application to Radiogenomics.
- Author
-
Zhang J, Saha A, Zhu Z, and Mazurowski MA
- Subjects
- Breast Neoplasms classification, Breast Neoplasms genetics, Female, Genomics, Humans, Breast Neoplasms diagnostic imaging, Image Interpretation, Computer-Assisted methods, Magnetic Resonance Imaging methods, Neural Networks, Computer
- Abstract
Breast tumor segmentation based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a challenging problem and an active area of research. Particular challenges, as in other segmentation problems, include the class-imbalance problem as well as confounding background in DCE-MR images. To address these issues, we propose a mask-guided hierarchical learning (MHL) framework for breast tumor segmentation via fully convolutional networks (FCN). Specifically, we first develop an FCN model to generate a 3D breast mask as the region of interest (ROI) for each image, to remove confounding information from input DCE-MR images. We then design a two-stage FCN model to perform coarse-to-fine segmentation for breast tumors. Particularly, we propose a Dice-Sensitivity-like loss function and a reinforcement sampling strategy to handle the class-imbalance problem. To precisely identify locations of tumors that underwent a biopsy, we further propose an FCN model to detect two landmarks located at the nipples. We finally select the biopsied tumor based on both identified landmarks and segmentations. We validate our MHL method on 272 patients, achieving a mean Dice similarity coefficient (DSC) of 0.72, which is comparable to the mutual DSC between expert radiologists. Using the segmented biopsied tumors, we also demonstrate that the automatically generated masks can be applied to radiogenomics and can identify the luminal A subtype from other molecular subtypes with accuracy similar to that of analysis based on semi-manual tumor segmentation.
- Published
- 2019
- Full Text
- View/download PDF
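The Dice-Sensitivity-like loss mentioned in entry 29 builds on the soft Dice loss; a minimal numpy version of plain soft Dice (the published variant adds a sensitivity term, which is not reproduced here):

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """1 - Dice overlap between a predicted probability map and a
    binary mask. Unlike per-voxel cross-entropy, it stays informative
    when the foreground is a tiny fraction of the volume (the
    class-imbalance case the abstract describes)."""
    pred, target = np.asarray(pred, float).ravel(), np.asarray(target, float).ravel()
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

The loss is 0 for a perfect prediction and approaches 1 when prediction and mask are disjoint, regardless of how small the tumor is.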
30. Relationship between Background Parenchymal Enhancement on High-risk Screening MRI and Future Breast Cancer Risk.
- Author
-
Grimm LJ, Saha A, Ghate SV, Kim C, Soo MS, Yoon SC, and Mazurowski MA
- Subjects
- Adult, Aged, Breast Neoplasms diagnostic imaging, Cohort Studies, Early Detection of Cancer, Female, Humans, Magnetic Resonance Imaging, Middle Aged, North Carolina epidemiology, Retrospective Studies, Risk Factors, Young Adult, Breast diagnostic imaging, Breast Neoplasms epidemiology, Carcinoma, Ductal, Breast epidemiology, Carcinoma, Intraductal, Noninfiltrating epidemiology, Carcinoma, Lobular epidemiology, Parenchymal Tissue diagnostic imaging
- Abstract
Rationale and Objectives: To determine if background parenchymal enhancement (BPE) on screening breast magnetic resonance imaging (MRI) in high-risk women correlates with future cancer., Materials and Methods: All screening breast MRIs (n = 1039) in high-risk women at our institution from August 1, 2004, to July 30, 2013, were identified. Sixty-one patients who subsequently developed breast cancer were matched 1:2 by age and high-risk indication with patients who did not develop breast cancer (n = 122). Five fellowship-trained breast radiologists independently recorded the BPE. The median reader BPE for each case was calculated and compared between the cancer and control cohorts., Results: Cancer cohort patients were high-risk because of a history of radiation therapy (10%, 6 of 61), high-risk lesion (18%, 11 of 61), or breast cancer (30%, 18 of 61); BRCA mutation (18%, 11 of 61); or family history (25%, 15 of 61). Subsequent malignancies were invasive ductal carcinoma (64%, 39 of 61), ductal carcinoma in situ (30%, 18 of 61) and invasive lobular carcinoma (7%, 4 of 61). BPE was significantly higher in the cancer cohort than in the control cohort (P = 0.01). Women with mild, moderate, or marked BPE were 2.5 times more likely to develop breast cancer than women with minimal BPE (odds ratio = 2.5, 95% confidence interval: 1.3-4.8, P = .005). There was fair interreader agreement (κ = 0.39)., Conclusions: High-risk women with greater than minimal BPE at screening MRI have increased risk of future breast cancer., (Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.)
- Published
- 2019
- Full Text
- View/download PDF
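The odds ratio with 95% confidence interval reported in entry 30 is conventionally computed from a 2x2 table with the Woolf (log) method; a sketch with illustrative counts, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
    a = exposed cases,   b = exposed controls,
    c = unexposed cases, d = unexposed controls.
    The CI is symmetric on the log-odds scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

An interval excluding 1 (as in the abstract's 1.3-4.8) indicates a statistically significant association at the chosen level.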
31. A machine learning approach to radiogenomics of breast cancer: a study of 922 subjects and 529 DCE-MRI features.
- Author
-
Saha A, Harowicz MR, Grimm LJ, Kim CE, Ghate SV, Walsh R, and Mazurowski MA
- Subjects
- Adult, Aged, Aged, 80 and over, Area Under Curve, Biomarkers, Tumor metabolism, Breast Neoplasms genetics, Breast Neoplasms metabolism, Female, Genomics methods, Humans, Machine Learning, Middle Aged, Receptor, ErbB-2 metabolism, Receptors, Estrogen metabolism, Receptors, Progesterone metabolism, Young Adult, Biomarkers, Tumor genetics, Breast Neoplasms diagnostic imaging, Breast Neoplasms pathology, Magnetic Resonance Imaging methods
- Abstract
Background: Recent studies showed preliminary data on associations of MRI-based imaging phenotypes of breast tumours with breast cancer molecular, genomic, and related characteristics. In this study, we present a comprehensive analysis of this relationship., Methods: We analysed a set of 922 patients with invasive breast cancer and pre-operative MRI. The MRIs were analysed by a computer algorithm to extract 529 features of the tumour and the surrounding tissue. Machine-learning-based models based on the imaging features were trained using a portion of the data (461 patients) to predict the following molecular, genomic, and proliferation characteristics: tumour surrogate molecular subtype, oestrogen receptor, progesterone receptor and human epidermal growth factor receptor 2 status, as well as a tumour proliferation marker (Ki-67). Trained models were evaluated on the set of the remaining 461 patients., Results: Multivariate models were predictive of Luminal A subtype with AUC = 0.697 (95% CI: 0.647-0.746, p < .0001), triple negative breast cancer with AUC = 0.654 (95% CI: 0.589-0.727, p < .0001), ER status with AUC = 0.649 (95% CI: 0.591-0.705, p < .001), and PR status with AUC = 0.622 (95% CI: 0.569-0.674, p < .0001). Associations between individual features and subtypes were also found., Conclusions: There is a moderate association between tumour molecular biomarkers and algorithmically assessed imaging features.
- Published
- 2018
- Full Text
- View/download PDF
32. A study of association of Oncotype DX recurrence score with DCE-MRI characteristics using multivariate machine learning models.
- Author
-
Saha A, Harowicz MR, Wang W, and Mazurowski MA
- Subjects
- Adult, Aged, Aged, 80 and over, Breast Neoplasms pathology, Cohort Studies, Female, Humans, Middle Aged, Multivariate Analysis, Neoplasm Recurrence, Local, Prognosis, Risk Factors, Algorithms, Breast Neoplasms diagnostic imaging, Machine Learning, Magnetic Resonance Imaging methods
- Abstract
Purpose: To determine whether multivariate machine learning models of algorithmically assessed magnetic resonance imaging (MRI) features from breast cancer patients are associated with Oncotype DX (ODX) test recurrence scores., Methods: A set of 261 female patients with invasive breast cancer, pre-operative dynamic contrast enhanced magnetic resonance (DCE-MR) images and available ODX score at our institution was identified. A computer algorithm extracted a comprehensive set of 529 features from the DCE-MR images of these patients. The set of patients was divided into a training set and a test set. Using the training set we developed two machine learning-based models to discriminate (1) high ODX scores from intermediate and low ODX scores, and (2) high and intermediate ODX scores from low ODX scores. The performance of these models was evaluated on the independent test set., Results: High against low and intermediate ODX scores were predicted by the multivariate model with AUC 0.77 (95% CI 0.56-0.98, p < 0.003). Low against intermediate and high ODX score was predicted with AUC 0.51 (95% CI 0.41-0.61, p = 0.75)., Conclusion: A moderate association between imaging and ODX score was identified. The evaluated models currently do not warrant replacement of ODX with imaging alone.
- Published
- 2018
- Full Text
- View/download PDF
33. Effects of MRI scanner parameters on breast cancer radiomics.
- Author
-
Saha A, Yu X, Sahoo D, and Mazurowski MA
- Abstract
Purpose: To assess the impact of varying magnetic resonance imaging (MRI) scanner parameters on the extraction of algorithmic features in breast MRI radiomics studies., Methods: In this retrospective study, breast imaging data for 272 patients were analyzed with magnetic resonance (MR) images. From the MR images, we assembled and implemented 529 algorithmic features of breast tumors and fibroglandular tissue (FGT). We divided the features into 10 groups based on the type of data used for the feature extraction and the nature of the extracted information. Three scanner parameters were considered: scanner manufacturer, scanner magnetic field strength, and slice thickness. We assessed the impact of each of the scanner parameters on each feature by testing whether the feature values differ systematically for different values of these scanner parameters. A two-sample t-test was used to establish whether the impact of a scanner parameter on values of a feature is significant, and receiver operating characteristic analysis was used to establish the extent of that effect., Results: On average, a higher proportion of FGT-related features (69%) than tumor-related features (20%) were affected by the three scanner parameters. Of all feature groups and scanner parameters, the feature group related to the variation in FGT enhancement was found to be the most sensitive to the scanner manufacturer (AUC = 0.81 ± 0.14)., Conclusions: Features involving calculations from FGT are particularly sensitive to the scanner parameters., Competing Interests: Disclosure: The authors have no conflict of interest to disclose.
- Published
- 2017
- Full Text
- View/download PDF
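Entry 33 quantifies the extent of a scanner parameter's effect on a feature with ROC analysis; for a two-valued parameter (e.g., two manufacturers), the AUC of a single feature equals the normalized Mann-Whitney U statistic, sketched here in numpy:

```python
import numpy as np

def feature_auc(values_a, values_b):
    """AUC for how well one feature separates two scanner groups,
    computed as the Mann-Whitney U statistic scaled to [0, 1]
    (fraction of cross-group pairs ordered a > b, ties as half)."""
    a = np.asarray(values_a, float)
    b = np.asarray(values_b, float)
    greater = (a[:, None] > b[None, :]).sum()
    ties = (a[:, None] == b[None, :]).sum()
    auc = (greater + 0.5 * ties) / (len(a) * len(b))
    return max(auc, 1.0 - auc)  # direction-agnostic separability
```

An AUC near 0.5 means the feature is insensitive to the scanner parameter; values near 1 (like the 0.81 reported for FGT enhancement variation vs. manufacturer) flag a systematic scanner effect.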
34. Can Occult Invasive Disease in Ductal Carcinoma In Situ Be Predicted Using Computer-extracted Mammographic Features?
- Author
-
Shi B, Grimm LJ, Mazurowski MA, Baker JA, Marks JR, King LM, Maley CC, Hwang ES, and Lo JY
- Subjects
- Algorithms, Area Under Curve, Breast Neoplasms pathology, Carcinoma, Ductal, Breast pathology, Carcinoma, Intraductal, Noninfiltrating pathology, Female, Humans, Middle Aged, Neoplasm Invasiveness, Neoplasm Staging, Neoplasms, Unknown Primary pathology, Observer Variation, Predictive Value of Tests, ROC Curve, Retrospective Studies, Breast Neoplasms diagnostic imaging, Carcinoma, Ductal, Breast diagnostic imaging, Carcinoma, Intraductal, Noninfiltrating diagnostic imaging, Image Interpretation, Computer-Assisted, Mammography, Neoplasms, Unknown Primary diagnostic imaging
- Abstract
Rationale and Objectives: This study aimed to determine whether mammographic features assessed by radiologists and using computer algorithms are prognostic of occult invasive disease for patients showing ductal carcinoma in situ (DCIS) only in core biopsy., Materials and Methods: In this retrospective study, we analyzed data from 99 subjects with DCIS (74 pure DCIS, 25 DCIS with occult invasion). We developed a computer-vision algorithm capable of extracting 113 features from magnification views in mammograms and combining these features to predict whether a DCIS case will be upstaged to invasive cancer at the time of definitive surgery. In comparison, we also built predictive models based on physician-interpreted features, which included histologic features extracted from biopsy reports and Breast Imaging Reporting and Data System-related mammographic features assessed by two radiologists. The generalization performance was assessed using leave-one-out cross validation with the receiver operating characteristic curve analysis., Results: Using the computer-extracted mammographic features, the multivariate classifier was able to distinguish DCIS with occult invasion from pure DCIS, with an area under the curve for receiver operating characteristic equal to 0.70 (95% confidence interval: 0.59-0.81). The physician-interpreted features including histologic features and Breast Imaging Reporting and Data System-related mammographic features assessed by two radiologists showed mixed results, and only one radiologist's subjective assessment was predictive, with an area under the curve for receiver operating characteristic equal to 0.68 (95% confidence interval: 0.57-0.81)., Conclusions: Predicting upstaging for DCIS based upon mammograms is challenging, and there exists significant interobserver variability among radiologists. 
However, the proposed computer-extracted mammographic features are promising for the prediction of occult invasion in DCIS., (Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.)
- Published
- 2017
- Full Text
- View/download PDF
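The leave-one-out cross validation used in entry 34 can be sketched generically: each case is scored by a model fit on all remaining cases, yielding one held-out score per case for the ROC analysis. The nearest-centroid scorer below is a hypothetical stand-in for the study's multivariate classifier:

```python
import numpy as np

def nearest_centroid_score(X_train, y_train, x):
    """Score = distance to the class-0 centroid minus distance to the
    class-1 centroid (higher means more class-1-like). A hypothetical
    simple classifier, not the one used in the study."""
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    return np.linalg.norm(x - c0) - np.linalg.norm(x - c1)

def loo_scores(X, y):
    """Leave-one-out cross validation: score case i with a model
    trained on every case except i."""
    X, y = np.asarray(X, float), np.asarray(y, int)
    return np.array([
        nearest_centroid_score(np.delete(X, i, 0), np.delete(y, i), X[i])
        for i in range(len(y))
    ])
```

Because every score comes from a model that never saw that case, the resulting ROC curve is an (almost) unbiased estimate of generalization, which matters for the small cohorts typical of these studies.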
35. Predictive Utility of Marketed Volumetric Software Tools in Subjects at Risk for Alzheimer Disease: Do Regions Outside the Hippocampus Matter?
- Author
-
Tanpitukpongse TP, Mazurowski MA, Ikhena J, and Petrella JR
- Subjects
- Aged, Aged, 80 and over, Atrophy diagnostic imaging, Atrophy pathology, Cognitive Dysfunction pathology, Disease Progression, Female, Humans, Logistic Models, Magnetic Resonance Imaging methods, Male, ROC Curve, Alzheimer Disease diagnostic imaging, Alzheimer Disease pathology, Hippocampus diagnostic imaging, Hippocampus pathology, Software
- Abstract
Background and Purpose: Alzheimer disease is a prevalent neurodegenerative disease. Computer assessment of brain atrophy patterns can help predict conversion to Alzheimer disease. Our aim was to assess the prognostic efficacy of individual-versus-combined regional volumetrics in 2 commercially available brain volumetric software packages for predicting conversion of patients with mild cognitive impairment to Alzheimer disease., Materials and Methods: Data were obtained through the Alzheimer's Disease Neuroimaging Initiative. One hundred ninety-two subjects (mean age, 74.8 years; 39% female) diagnosed with mild cognitive impairment at baseline were studied. All had T1-weighted MR imaging sequences at baseline and 3-year clinical follow-up. Analysis was performed with NeuroQuant and Neuroreader. Receiver operating characteristic curves assessing the prognostic efficacy of each software package were generated using a univariable approach based on individual regional brain volumes and 2 multivariable approaches (multiple regression and random forest) combining multiple volumes., Results: On univariable analysis of 11 NeuroQuant and 11 Neuroreader regional volumes, hippocampal volume had the highest area under the curve for both software packages (0.69, NeuroQuant; 0.68, Neuroreader) and was not significantly different (P > .05) between packages. Multivariable analysis did not increase the area under the curve for either package (0.63, logistic regression; 0.60, random forest NeuroQuant; 0.65, logistic regression; 0.62, random forest Neuroreader)., Conclusions: Of the multiple regional volume measures available in FDA-cleared brain volumetric software packages, hippocampal volume remains the best single predictor of conversion of mild cognitive impairment to Alzheimer disease at 3-year follow-up. Combining volumetrics did not add additional prognostic efficacy. 
Therefore, future prognostic studies in mild cognitive impairment, combining such tools with demographic and other biomarker measures, are justified in using hippocampal volume as the only volumetric biomarker., (© 2017 by American Journal of Neuroradiology.)
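The comparison design above (single-volume ROC analysis versus a cross-validated multivariable model over all regional volumes) can be sketched as follows. This is a minimal illustration on synthetic data: the cohort size matches the study, but the variable names and simulated volumes are illustrative assumptions, not ADNI measurements.

```python
# Sketch: univariable ROC AUC for one regional volume vs. a multivariable
# logistic-regression model combining volumes, cross-validated to avoid
# optimistic bias. All data below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 192  # study cohort size
# Simulated z-scored regional volumes; conversion depends mostly on hippocampus.
hippocampus = rng.normal(size=n)
other_regions = rng.normal(size=(n, 10))
converted = (hippocampus + 0.5 * rng.normal(size=n)) < -0.3  # atrophy -> conversion

# Univariable: rank subjects by a single volume (negated: smaller = higher risk).
auc_single = roc_auc_score(converted, -hippocampus)

# Multivariable: logistic regression over all 11 volumes, evaluated with
# cross-validated predicted probabilities.
X = np.column_stack([hippocampus, other_regions])
probs = cross_val_predict(LogisticRegression(max_iter=1000), X, converted,
                          cv=5, method="predict_proba")[:, 1]
auc_multi = roc_auc_score(converted, probs)
print(f"single-volume AUC: {auc_single:.2f}, combined AUC: {auc_multi:.2f}")
```

As in the study, when the extra regions carry little independent signal, the combined model's cross-validated AUC does not beat the single best volume.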
- Published
- 2017
- Full Text
- View/download PDF
36. Modeling false positive error making patterns in radiology trainees for improved mammography education.
- Author
-
Zhang J, Silber JI, and Mazurowski MA
- Subjects
- Algorithms, Computational Biology, False Positive Reactions, Female, Humans, Machine Learning, ROC Curve, Ultrasonography, Breast Neoplasms diagnostic imaging, Mammography methods, Radiology education
- Abstract
Introduction: While mammography notably contributes to earlier detection of breast cancer, it has its limitations, including a large number of false positive exams. Improved radiology education could potentially help alleviate this issue. Toward this goal, in this paper we propose an algorithm for modeling false positive error making among radiology trainees. Identifying troublesome locations for the trainees could focus their training and in turn improve their performance. Methods: The algorithm proposed in this paper predicts locations that are likely to result in a false positive error for each trainee, based on the previous annotations made by the trainee. The algorithm consists of three steps. First, suspicious false positive locations are identified in mammograms by a Difference of Gaussian filter, and suspicious regions are segmented by computer vision-based segmentation algorithms. Second, 133 features are extracted for each suspicious region to describe its distinctive characteristics. Third, a random forest classifier is applied to predict the likelihood of the trainee making a false positive error using the extracted features. The random forest classifier is trained using previous annotations made by the trainee. We evaluated the algorithm using data from a reader study in which 3 experts and 10 trainees interpreted 100 mammographic cases. Results: The algorithm was able to identify locations where the trainee will commit a false positive error with accuracy higher than an algorithm that selects such locations randomly. Specifically, our algorithm found false positive locations with 40% accuracy when only 1 location was selected for all cases for each trainee, and with 12% accuracy when 10 locations were selected. 
The corresponding accuracies for randomly selected locations were 0% in both scenarios. Conclusions: In this first study on the topic, we were able to build computer models that could find locations for which a trainee will make a false positive error in images not previously seen by the trainee. Presenting the trainees with such locations rather than randomly selected ones may improve their educational outcomes. (Copyright © 2015 Elsevier Inc. All rights reserved.)
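The first step of the pipeline, candidate generation with a Difference of Gaussian (DoG) filter, can be sketched as below. The image is synthetic and the sigma values, window size, and threshold are illustrative assumptions, not the paper's parameters; in the full method each candidate would then be segmented, described by 133 features, and scored by a per-trainee random forest.

```python
# Sketch: Difference-of-Gaussian filtering to flag bright, mass-like blobs
# as candidate locations. Parameters are illustrative, not from the paper.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

rng = np.random.default_rng(1)
image = rng.normal(0.0, 0.1, size=(128, 128))
image[60:70, 60:70] += 1.0  # one synthetic "suspicious" blob

# DoG: subtract a coarse blur from a fine blur; structures near the
# band-pass scale stand out while noise and slow background are suppressed.
dog = gaussian_filter(image, sigma=2) - gaussian_filter(image, sigma=6)

# Candidate locations = local maxima of the DoG response above a threshold.
local_max = (dog == maximum_filter(dog, size=15)) & (dog > 0.2)
candidates = np.argwhere(local_max)
print(candidates)  # row/col coordinates of suspicious regions
```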
- Published
- 2015
- Full Text
- View/download PDF
37. Radiology resident mammography training: interpretation difficulty and error-making patterns.
- Author
-
Grimm LJ, Kuzmiak CM, Ghate SV, Yoon SC, and Mazurowski MA
- Subjects
- Female, Humans, Male, North Carolina, Observer Variation, Reproducibility of Results, Sensitivity and Specificity, Breast Neoplasms diagnostic imaging, Clinical Competence statistics & numerical data, Diagnostic Errors statistics & numerical data, Internship and Residency statistics & numerical data, Mammography statistics & numerical data, Radiology education
- Abstract
Rationale and Objectives: The purpose of this study was to better understand the concept of mammography difficulty and how it affects radiology resident performance. Materials and Methods: Seven radiology residents and three expert breast imagers reviewed 100 mammograms, consisting of bilateral medial lateral oblique and craniocaudal views, using a research workstation. The cases consisted of normal, benign, and malignant findings. Participants identified abnormalities and scored the difficulty and malignant potential for each case. Resident performance (sensitivity, specificity, and area under the receiver operating characteristic curve [AUC]) was calculated for self- and expert-assessed high and low difficulties. Results: For cases classified by self-assessed difficulty, the resident AUCs were 0.667 for high difficulty and 0.771 for low difficulty cases (P = .010). Resident sensitivities were 0.707 for high and 0.614 for low difficulty cases (P = .113). Resident specificities were 0.583 for high and 0.905 for low difficulty cases (P < .001). For cases classified by expert-assessed difficulty, the resident AUCs were 0.583 for high and 0.783 for low difficulty cases (P = .001). Resident sensitivities were 0.558 for high and 0.796 for low difficulty cases (P < .001). Resident specificities were 0.714 for high and 0.740 for low difficulty cases (P = .807). Conclusions: Increased self- and expert-assessed difficulty is associated with a decrease in resident performance in mammography. However, while this lower performance is due to a decrease in specificity for self-assessed difficulty, it is due to a decrease in sensitivity for expert-assessed difficulty. These trends suggest that educators should provide a mix of self- and expert-assessed difficult cases in educational materials to maximize the effect of training on resident performance and confidence. (Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.)
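The difficulty-stratified evaluation in the abstract can be sketched as below: split the cases by assessed difficulty, then compute sensitivity, specificity, and ROC AUC within each stratum. The reader scores and labels here are synthetic placeholders, not the study's annotations.

```python
# Sketch: per-difficulty-stratum reader metrics on simulated data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 100
truth = rng.integers(0, 2, n)          # 1 = malignant case
difficulty = rng.integers(0, 2, n)     # 1 = high assessed difficulty
# Simulated reader scores track the truth less reliably on difficult cases.
score = truth + rng.normal(0.0, np.where(difficulty == 1, 1.5, 0.5))
call = score > 0.5                     # reader's binary decision

results = {}
for label, mask in (("low", difficulty == 0), ("high", difficulty == 1)):
    t, c, s = truth[mask], call[mask], score[mask]
    sens = (c & (t == 1)).sum() / (t == 1).sum()
    spec = (~c & (t == 0)).sum() / (t == 0).sum()
    results[label] = (sens, spec, roc_auc_score(t, s))
    print(label, results[label])  # (sensitivity, specificity, AUC)
```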
- Published
- 2014
- Full Text
- View/download PDF
38. Predictors of an academic career on radiology residency applications.
- Author
-
Grimm LJ, Shapiro LM, Singhapricha T, Mazurowski MA, Desser TS, and Maxfield CM
- Subjects
- North Carolina, Radiology statistics & numerical data, Workforce, Academic Medical Centers statistics & numerical data, Career Choice, Faculty, Medical statistics & numerical data, Internship and Residency statistics & numerical data, Radiology education
- Abstract
Rationale and Objectives: To evaluate radiology residency applications to determine if any variables are predictive of a future academic radiology career. Materials and Methods: Application materials from 336 radiology residency graduates between 1993 and 2010 from the Department of Radiology, Duke University and between 1990 and 2010 from the Department of Radiology, Stanford University were retrospectively reviewed. The institutional review boards approved this Health Insurance Portability and Accountability Act-compliant study with a waiver of informed consent. Biographical (gender, age at application, advanced degrees, prior career), undergraduate school (school, degree, research experience, publications), and medical school (school, research experience, manuscript publications, Alpha Omega Alpha membership, clerkship grades, United States Medical Licensing Examination Step 1 and 2 scores, personal statement and letter of recommendation reference to academics, couples match status) data were recorded. Listing in the Association of American Medical Colleges Faculty Online Directory and postgraduation publications were used to determine academic status. Results: There were 72 (21%) radiologists in an academic career and 264 (79%) in a nonacademic career. Variables associated with an academic career were elite undergraduate school (P = .003), undergraduate school publications (P = .018), additional advanced degrees (P = .027), elite medical school (P = .006), a research year in medical school (P < .001), and medical school publications (P < .001). A multivariate cross-validation analysis showed that these variables are jointly predictive of an academic career (P < .001). Conclusions: Undergraduate and medical school rankings and publications, as well as a medical school research year and an additional advanced degree, are associated with an academic career. 
Radiology residency selection committees should consider these factors in the context of the residency application if they wish to recruit future academic radiologists. (Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.)
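The multivariate cross-validation step, testing whether the individually significant application variables are jointly predictive, can be sketched as a cross-validated logistic model. The features and outcomes below are simulated stand-ins for the 336 reviewed applications, not the study's data.

```python
# Sketch: cross-validated joint prediction from binary application variables.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 336
# Binary application variables (elite school, publications, research year, ...).
X = rng.integers(0, 2, size=(n, 6)).astype(float)
# Simulated outcome: academic career is more likely with more of the factors.
logit = X.sum(axis=1) - 4.5
academic = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Cross-validated discrimination of the combined model.
auc = cross_val_score(LogisticRegression(), X, academic,
                      cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")
```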
- Published
- 2014
- Full Text
- View/download PDF
39. Imaging descriptors improve the predictive power of survival models for glioblastoma patients.
- Author
-
Mazurowski MA, Desjardins A, and Malof JM
- Subjects
- Brain Neoplasms therapy, Glioblastoma therapy, Humans, Predictive Value of Tests, ROC Curve, Survival Rate, Brain Neoplasms diagnosis, Brain Neoplasms mortality, Glioblastoma diagnosis, Glioblastoma mortality, Magnetic Resonance Imaging, Models, Statistical
- Abstract
Background: Because effective prediction of survival time can be highly beneficial for the treatment of glioblastoma patients, the relationship between survival time and multiple patient characteristics has been investigated. In this paper, we investigate whether the predictive power of a survival model based on clinical patient features improves when MRI features are also included in the model. Methods: The subjects in this study were 82 glioblastoma patients for whom clinical features as well as MR imaging exams were made available by The Cancer Genome Atlas (TCGA) and The Cancer Imaging Archive (TCIA). Twenty-six imaging features in the available MR scans were assessed by radiologists from the TCGA Glioma Phenotype Research Group. We used multivariate Cox proportional hazards regression to construct 2 survival models: one that used 3 clinical features (age, gender, and KPS) as the covariates and one that used both the imaging features and the clinical features as the covariates. Then, we used 2 measures to compare the predictive performance of these 2 models: area under the receiver operating characteristic curve for the 1-year survival threshold and overall concordance index. To eliminate any positive performance estimation bias, we used leave-one-out cross-validation. Results: The performance of the model based on both clinical and imaging features was higher than the performance of the model based on only the clinical features, in terms of both area under the receiver operating characteristic curve (P < .01) and the overall concordance index (P < .01). Conclusions: Imaging features assessed using a controlled lexicon have additional predictive value compared with clinical features when predicting survival time in glioblastoma patients.
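The concordance index used as the paper's second evaluation measure can be illustrated as below: a straightforward Harrell's C implementation, compared for a clinical-only risk score versus a clinical-plus-imaging score. The survival data are synthetic and the risk scores are hand-set stand-ins for fitted Cox model predictors.

```python
# Sketch: Harrell's concordance index on simulated survival data.
import numpy as np

def concordance_index(time, event, risk):
    """Fraction of usable pairs in which the higher-risk subject fails earlier."""
    n_conc, n_pairs = 0.0, 0
    for i in range(len(time)):
        for j in range(len(time)):
            # A pair is usable when subject i's event is observed before time[j].
            if event[i] and time[i] < time[j]:
                n_pairs += 1
                if risk[i] > risk[j]:
                    n_conc += 1.0
                elif risk[i] == risk[j]:
                    n_conc += 0.5
    return n_conc / n_pairs

rng = np.random.default_rng(4)
n = 82  # cohort size in the study
clinical = rng.normal(size=n)   # stand-in for an age/KPS composite
imaging = rng.normal(size=n)    # stand-in for an imaging-feature composite
# Exponential survival times whose hazard depends on both components.
time = rng.exponential(1.0 / np.exp(0.5 * clinical + 0.8 * imaging))
event = rng.random(n) < 0.8     # simplistic independent ~20% censoring

c_clin = concordance_index(time, event, 0.5 * clinical)
c_both = concordance_index(time, event, 0.5 * clinical + 0.8 * imaging)
print(f"C-index clinical only: {c_clin:.2f}, clinical + imaging: {c_both:.2f}")
```

Consistent with the study's finding, the score that also carries the imaging signal ranks survival times more concordantly than the clinical-only score.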
- Published
- 2013
- Full Text
- View/download PDF
40. Mutual information-based template matching scheme for detection of breast masses: from mammography to digital breast tomosynthesis.
- Author
-
Mazurowski MA, Lo JY, Harrawood BP, and Tourassi GD
- Subjects
- Algorithms, Breast Neoplasms diagnosis, Breast Neoplasms diagnostic imaging, Diagnosis, Computer-Assisted methods, Female, Humans, Pattern Recognition, Automated, Breast pathology, Mammography methods, Radiographic Image Interpretation, Computer-Assisted methods
- Abstract
Development of a computational decision aid for a new medical imaging modality is typically a long and complicated process. It consists of collecting data in the form of images and annotations, developing image processing and pattern recognition algorithms for analysis of the new images, and finally testing the resulting system. Since new imaging modalities are developed more rapidly than ever before, any effort to decrease the time and cost of this development process could maximize the benefit of the new imaging modality to patients by making computer aids quickly available to the radiologists who interpret the images. In this paper, we take a step in this direction and investigate the possibility of translating knowledge about the detection problem from one imaging modality to another. Specifically, we present a computer-aided detection (CAD) system for mammographic masses that uses a mutual information-based template matching scheme with intelligently selected templates. We previously presented the principles of template matching with mutual information for mammography. In this paper, we present an implementation of those principles in a complete computer-aided detection system. The proposed system, through an automatic optimization process, chooses the most useful templates (mammographic regions of interest) using a large database of previously collected and annotated mammograms. Through this process, knowledge about the task of detecting masses in mammograms is incorporated into the system. Then, we evaluate whether our system developed for screen-film mammograms can be successfully applied not only to other mammograms but also to digital breast tomosynthesis (DBT) reconstructed slices, without adding any DBT cases for training. 
Our rationale is that since mutual information is known to be a robust inter-modality image similarity measure, it has high potential for transferring knowledge between modalities in the context of the mass detection task. Experimental evaluation of the system on mammograms showed competitive performance compared to other mammography CAD systems recently published in the literature. When the system was applied "as-is" to DBT, its performance was notably worse than that for mammograms. However, with a simple additional preprocessing step, the performance of the system reached levels similar to those obtained for mammograms. In conclusion, the presented CAD system not only performed competitively on screen-film mammograms but also performed robustly on DBT, showing that direct transfer of knowledge across breast imaging modalities for mass detection is in fact possible. (Copyright © 2011 Elsevier Inc. All rights reserved.)
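The core similarity measure behind the template matching scheme can be sketched as below: mutual information between a template patch and a candidate region, estimated from their joint gray-level histogram. Mutual information needs no linear (or even monotone-linear) intensity relation between the two patches, which is why it is robust across modalities. The patches and bin count here are illustrative, not the system's actual templates or parameters.

```python
# Sketch: histogram-based mutual information between two image patches.
import numpy as np

def mutual_information(a, b, bins=32):
    """MI (in nats) estimated from the joint gray-level histogram of a and b."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0  # skip empty cells to avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(5)
template = rng.random((64, 64))
# A nonlinearly remapped, slightly noisy copy still shares information
# with the template (as a mass imaged in another modality would) ...
related = np.sqrt(template) + 0.05 * rng.random((64, 64))
# ... while an independent patch does not.
unrelated = rng.random((64, 64))

mi_related = mutual_information(template, related)
mi_unrelated = mutual_information(template, unrelated)
print(f"MI related: {mi_related:.2f}, MI unrelated: {mi_unrelated:.2f}")
```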
- Published
- 2011
- Full Text
- View/download PDF