13 results for "Summers RM"
Search Results
2. Fifty years of SPIE Medical Imaging proceedings papers.
- Author
- Nishikawa RM, Deserno TM, Madabhushi A, Krupinski EA, Summers RM, Hoeschen C, Mello-Thoms C, Myers KJ, Kupinski MA, and Siewerdsen JH
- Abstract
Purpose: To commemorate the 50th anniversary of the first SPIE Medical Imaging meeting, we highlight some of the important papers published in the conference proceedings. Approach: We identified the most cited and most downloaded papers, and we also asked members of the editorial board of the Journal of Medical Imaging to select their favorite papers. Results: There was very little overlap between the three methods of highlighting papers: the most downloaded papers were mostly recent, whereas the favorite papers were mostly older. Conclusions: The three methods combined provide an overview of the highlights of the papers published in the SPIE Medical Imaging conference proceedings over the last 50 years. (© 2022 Society of Photo-Optical Instrumentation Engineers (SPIE).)
- Published
- 2022
- Full Text
- View/download PDF
3. SPIE Computer-Aided Diagnosis conference anniversary review.
- Author
- Summers RM and Giger ML
- Abstract
The SPIE Computer-Aided Diagnosis conference has been held for 16 consecutive years at the annual SPIE Medical Imaging symposium. The conference remains vibrant, with a core group of submitters as well as new submitters and attendees each year. Recent developments include a marked shift in submissions relating to the artificial intelligence revolution in medical image analysis. This review describes the topics and trends observed in research presented at the Computer-Aided Diagnosis conference as part of the 50th-anniversary celebration of SPIE Medical Imaging. (© 2022 The Authors.)
- Published
- 2022
- Full Text
- View/download PDF
4. Erratum: Prostate cancer detection from multi-institution multiparametric MRIs using deep convolutional neural networks system (Erratum).
- Author
- Sumathipala Y, Lay N, Turkbey B, Smith C, Choyke PL, and Summers RM
- Abstract
[This corrects the article DOI: 10.1117/1.JMI.5.4.044507.] (© 2019 Society of Photo-Optical Instrumentation Engineers (SPIE).)
- Published
- 2019
- Full Text
- View/download PDF
5. Fully automated prostate whole gland and central gland segmentation on MRI using holistically nested networks with short connections.
- Author
- Cheng R, Lay N, Roth HR, Turkbey B, Jin D, Gandler W, McCreedy ES, Pohida T, Pinto P, Choyke P, McAuliffe MJ, and Summers RM
- Abstract
Accurate, automated segmentation of the prostate whole gland and central gland on MR images is essential for any prostate cancer diagnosis system. Our work presents a 2-D orthogonal deep learning method to automatically segment the whole prostate and central gland from T2-weighted, axial-only MR images. The proposed method can generate high-density 3-D surfaces from MR images with low z-axis resolution. Most previous methods have focused on axial images alone, e.g., 2-D segmentation of the prostate from each axial slice; such methods tend to over- or under-segment the prostate at the apex and base, which is a major source of error. The proposed method leverages orthogonal context to reduce apex and base segmentation ambiguities. It also avoids the jittering and stair-step surface artifacts that arise when constructing a 3-D surface from 2-D segmentations or from direct 3-D segmentation approaches such as 3-D U-Net. Experimental results show that the proposed method achieves a Dice similarity coefficient (DSC) of 92.4% ± 3% for the whole prostate and 90.1% ± 4.6% for the central gland, without trimming any ending contours at the apex and base. The experiments illustrate the feasibility and robustness of 2-D holistically nested networks with short connections for MR prostate and central gland segmentation; the method achieves results on par with the current literature. (A sketch of fusing orthogonal 2-D slice predictions into a 3-D mask follows this entry.)
- Published
- 2019
- Full Text
- View/download PDF
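The following is a minimal sketch, in Python, of the idea behind the orthogonal 2-D approach described in entry 5: per-slice predictions taken along the three orthogonal axes are averaged into a single 3-D probability volume and thresholded. The predict_slice function is a hypothetical stand-in for the trained holistically nested network with short connections, not the authors' model.

```python
import numpy as np

def predict_slice(slice_2d):
    """Hypothetical stand-in for the trained 2-D network: returns a [0, 1] probability map."""
    s = slice_2d.astype(np.float32)
    return (s - s.min()) / (s.max() - s.min() + 1e-8)   # dummy normalization, not a real model

def segment_orthogonal(volume, threshold=0.5):
    """Apply the 2-D model along the three orthogonal axes and average the predictions."""
    prob = np.zeros_like(volume, dtype=np.float32)
    for axis in range(3):                           # the three slicing directions
        slices = np.moveaxis(volume, axis, 0)       # iterate slices along this axis
        pred = np.stack([predict_slice(sl) for sl in slices])
        prob += np.moveaxis(pred, 0, axis)          # restore the original volume orientation
    prob /= 3.0                                     # average over the three orthogonal views
    return prob >= threshold

if __name__ == "__main__":
    toy_volume = np.random.rand(32, 64, 64)         # stand-in for a T2-weighted MR volume
    mask = segment_orthogonal(toy_volume)
    print("foreground voxels:", int(mask.sum()))
```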
6. Prostate cancer detection from multi-institution multiparametric MRIs using deep convolutional neural networks.
- Author
- Sumathipala Y, Lay N, Turkbey B, Smith C, Choyke PL, and Summers RM
- Abstract
Multiparametric magnetic resonance imaging (mpMRI) of the prostate aids in early diagnosis of prostate cancer but is difficult to interpret and subject to interreader variability. Our objective is to build a computer-aided diagnosis (CAD) system that generates probability maps, overlaid on the original mpMRI images, to help radiologists identify where cancer is suspected. We optimized the holistically nested edge detection (HED) deep convolutional neural network. Our dataset contains T2, apparent diffusion coefficient, and high b-value images from 186 patients across six institutions worldwide: 92 with an endorectal coil (ERC) and 94 without. Ground truth consisted of tumor segmentations manually drawn by expert radiologists on the basis of histologic evidence of cancer. The training set consisted of 120 patients; the validation and test sets included 19 and 47 patients, respectively. Slice-level probability maps are evaluated at the lesion level of analysis. The best model was HED with 5 × 5 convolutional kernels and batch normalization, optimized with Adam. This CAD performed significantly better (p < 0.001) in the peripheral zone (AUC = 0.94 ± 0.01) than in the transition zone. It outperforms a previous CAD from our group in a head-to-head comparison on the same ERC-only test cases (AUC = 0.97 ± 0.01; p < 0.001). Our CAD establishes state-of-the-art performance for predicting prostate cancer lesions on mpMRI. (A sketch of overlaying a probability map on a T2 slice follows this entry.)
- Published
- 2018
- Full Text
- View/download PDF
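Below is a minimal sketch of the overlay step described in entry 6: blending a CAD probability map over a T2-weighted slice so suspicious regions are highlighted for the radiologist. The array names, colormap, and alpha value are illustrative assumptions; the actual probability maps come from the optimized HED network described in the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

def overlay_probability(t2_slice, prob_map, alpha=0.4, out_path="overlay.png"):
    """Blend a [0, 1] probability map over a grayscale slice and save the figure."""
    plt.figure(figsize=(5, 5))
    plt.imshow(t2_slice, cmap="gray")                                   # anatomical background
    plt.imshow(prob_map, cmap="jet", alpha=alpha, vmin=0.0, vmax=1.0)   # suspicion heat map
    plt.colorbar(label="cancer probability", fraction=0.046)
    plt.axis("off")
    plt.savefig(out_path, bbox_inches="tight", dpi=150)
    plt.close()

if __name__ == "__main__":
    t2 = np.random.rand(256, 256)        # toy T2-weighted slice
    prob = np.zeros((256, 256))
    prob[100:130, 120:150] = 0.9         # toy suspicious region
    overlay_probability(t2, prob)
```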
7. DeepLesion: automated mining of large-scale lesion annotations and universal lesion detection with deep learning.
- Author
- Yan K, Wang X, Lu L, and Summers RM
- Abstract
Extracting, harvesting, and building large-scale annotated radiological image datasets is an important yet challenging problem. Meanwhile, vast amounts of clinical annotations have been collected and stored in hospitals' picture archiving and communication systems (PACS). These annotations, also known as bookmarks in PACS, are usually marked by radiologists during their daily workflow to highlight significant image findings that may serve as references for later studies. We propose to mine and harvest these abundant retrospective medical data to build a large-scale lesion image dataset. Our process is scalable and requires minimal manual annotation effort. We mine bookmarks in our institute to develop DeepLesion, a dataset with 32,735 lesions in 32,120 CT slices from 10,594 studies of 4,427 unique patients. The dataset contains a variety of lesion types, such as lung nodules, liver tumors, and enlarged lymph nodes, and has the potential to be used in various medical imaging applications. Using DeepLesion, we train a universal lesion detector that can find all types of lesions with one unified framework. In this challenging task, the proposed lesion detector achieves a sensitivity of 81.1% at five false positives per image. (A sketch of computing sensitivity at a fixed false-positive rate follows this entry.)
- Published
- 2018
- Full Text
- View/download PDF
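The lesion-detection result quoted in entry 7 (81.1% sensitivity at five false positives per image) corresponds to one operating point on a free-response curve. The sketch below shows one plausible way to compute sensitivity and false positives per image from IoU-matched bounding boxes; the box format, the 0.5 IoU threshold, and the greedy matching rule are assumptions, not the paper's exact evaluation protocol.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection over union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def detection_stats(preds: List[List[Tuple[Box, float]]],
                    gts: List[List[Box]],
                    score_thr: float,
                    iou_thr: float = 0.5) -> Tuple[float, float]:
    """Return (sensitivity, false positives per image) at one score threshold."""
    tp = fp = n_gt = 0
    for detections, gt_boxes in zip(preds, gts):
        n_gt += len(gt_boxes)
        matched = set()
        for box, score in detections:
            if score < score_thr:
                continue
            hit = next((i for i, g in enumerate(gt_boxes)
                        if i not in matched and iou(box, g) >= iou_thr), None)
            if hit is None:
                fp += 1                  # no unmatched ground-truth lesion overlaps this box
            else:
                matched.add(hit)
                tp += 1
    return (tp / n_gt if n_gt else 0.0), fp / max(len(preds), 1)

if __name__ == "__main__":
    gts = [[(10, 10, 30, 30)], [(50, 50, 80, 80)]]                     # one lesion per toy image
    preds = [[((12, 11, 29, 31), 0.9), ((100, 100, 120, 120), 0.4)],
             [((48, 52, 79, 81), 0.8)]]
    for thr in (0.3, 0.5, 0.7):
        sens, fppi = detection_stats(preds, gts, thr)
        print(f"threshold {thr}: sensitivity {sens:.2f}, FP/image {fppi:.2f}")
```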
8. Holistic segmentation of the lung in cine MRI.
- Author
- Kovacs W, Hsieh N, Roth H, Nnamdi-Emeratom C, Bandettini WP, Arai A, Mankodi A, Summers RM, and Yao J
- Abstract
Duchenne muscular dystrophy (DMD) is a childhood-onset neuromuscular disease that results in the degeneration of muscle, starting in the extremities before progressing to more vital areas such as the lungs. Respiratory failure and pneumonia due to respiratory muscle weakness lead to hospitalization and early mortality. However, tracking the disease in this region can be difficult, as current methods are based on breathing tests and cannot distinguish which muscles are involved. Cine MRI scans give insight into respiratory muscle movements, but the images suffer from low spatial resolution and a poor signal-to-noise ratio. Thus, a robust lung segmentation method is required for accurate analysis of the lung and respiratory muscle movement. We deployed a deep learning approach that uses sequence-specific prior information to assist the segmentation of the lung in cine MRI. More specifically, we adopt a holistically nested network to conduct image-to-image holistic training and prediction. One frame of the cine MRI is used for training, and the model is applied to the remainder of the sequence ([Formula: see text] frames). We applied this method to cine MRIs of the lung in the axial, sagittal, and coronal planes. Characteristic lung motion patterns during the breathing cycle were then derived from the segmentations and used for diagnosis. Our dataset consisted of 31 young boys, age [Formula: see text] years, 15 of whom suffered from DMD; the remaining 16 subjects were age-matched healthy volunteers. For validation, slices from inspiratory and expiratory cycles were manually segmented and compared with results obtained from our method. The Dice similarity coefficient for the deep learning-based method was [Formula: see text] for the sagittal view, [Formula: see text] for the axial view, and [Formula: see text] for the coronal view. The holistic neural network approach was compared with an approach using Demons registration and showed superior performance. These results suggest that the deep learning-based method reliably and accurately segments the lung across the breathing cycle. (A sketch of the Dice similarity coefficient used for validation follows this entry.)
- Published
- 2017
- Full Text
- View/download PDF
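Entry 8 validates the automated lung masks against manual segmentations using the Dice similarity coefficient. The short sketch below shows the standard Dice computation for binary masks; the segmentation network itself is not reproduced, and the masks are toy data.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice = 2 * |A intersect B| / (|A| + |B|) for two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0                      # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

if __name__ == "__main__":
    auto = np.zeros((128, 128), dtype=bool)
    auto[30:90, 40:100] = True           # toy automated lung mask
    manual = np.zeros((128, 128), dtype=bool)
    manual[35:95, 40:100] = True         # toy manual mask
    print(f"Dice: {dice_coefficient(auto, manual):.3f}")
```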
9. Automatic magnetic resonance prostate segmentation by deep learning with holistically nested networks.
- Author
- Cheng R, Roth HR, Lay N, Lu L, Turkbey B, Gandler W, McCreedy ES, Pohida T, Pinto PA, Choyke P, McAuliffe MJ, and Summers RM
- Abstract
Accurate automatic segmentation of the prostate in magnetic resonance images (MRI) is a challenging task due to the high variability of prostate anatomic structure. Artifacts such as noise, together with the similar signal intensity of tissues around the prostate boundary, prevent traditional segmentation methods from achieving high accuracy. We investigate both patch-based and holistic (image-to-image) deep-learning methods for segmentation of the prostate. First, we introduce a patch-based convolutional network that refines the prostate contour given an initialization. Second, we propose a method for end-to-end prostate segmentation by integrating holistically nested edge detection with fully convolutional networks. Holistically nested networks (HNN) automatically learn a hierarchical representation that can improve prostate boundary detection. Quantitative evaluation is performed on the MRI scans of 250 patients in fivefold cross-validation. The proposed enhanced HNN model achieves a mean ± standard deviation Dice similarity coefficient (DSC) of [Formula: see text] and a mean Jaccard similarity coefficient (IoU) of [Formula: see text], computed without trimming any end slices. The proposed holistic model significantly ([Formula: see text]) outperforms a patch-based AlexNet model by 9% in DSC and 13% in IoU. Overall, the method achieves state-of-the-art performance compared with other MRI prostate segmentation methods in the literature. (A sketch of the fivefold cross-validation protocol follows this entry.)
- Published
- 2017
- Full Text
- View/download PDF
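Entry 9 reports DSC and IoU from fivefold cross-validation over 250 patients. The sketch below illustrates such a protocol with scikit-learn's KFold; the patient identifiers, the evaluate_fold stub, and its placeholder metric values are illustrative assumptions and not results from the paper.

```python
import numpy as np
from sklearn.model_selection import KFold

patients = np.array([f"patient_{i:03d}" for i in range(250)])  # 250 MRI scans, as in the entry above
kf = KFold(n_splits=5, shuffle=True, random_state=0)

def evaluate_fold(train_ids, test_ids):
    """Stand-in for training the segmentation model on train_ids and scoring test_ids."""
    rng = np.random.default_rng(len(test_ids))
    # Placeholder numbers only -- not results from the paper.
    return {"dsc": rng.uniform(0.0, 1.0), "iou": rng.uniform(0.0, 1.0)}

fold_scores = [evaluate_fold(patients[train], patients[test])
               for train, test in kf.split(patients)]
print("mean DSC:", np.mean([s["dsc"] for s in fold_scores]))
print("mean IoU:", np.mean([s["iou"] for s in fold_scores]))
```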
10. Special Section Guest Editorial: Radiomics and Deep Learning.
- Author
- Kontos D, Summers RM, and Giger M
- Abstract
This guest editorial introduces and summarizes the JMI Special Section on Radiomics and Deep Learning.
- Published
- 2017
- Full Text
- View/download PDF
11. Detection of prostate cancer in multiparametric MRI using random forest with instance weighting.
- Author
- Lay N, Tsehay Y, Greer MD, Turkbey B, Kwak JT, Choyke PL, Pinto P, Wood BJ, and Summers RM
- Abstract
We propose a prostate computer-aided diagnosis (CAD) system based on a random forest that detects prostate cancer using a combination of spatial, intensity, and texture features extracted from three sequences: T2W, ADC, and B2000 images. Random forest training uses instance-level weighting so that small and large cancerous lesions, as well as small and large prostate background regions, are treated equally. Two other approaches, based on an AutoContext pipeline intended to make better use of sequence-specific patterns, were also considered: one applies a random forest to each individual sequence, while the other uses an image filter designed to produce probability-map-like images. These were compared with a previously published CAD approach based on a support vector machine (SVM) evaluated on the same data. The random forest, features, sampling strategy, and instance-level weighting improve prostate cancer detection performance [area under the curve (AUC) 0.93] in comparison to the SVM (AUC 0.86) on the same test data. Using a simple image filtering technique as a first-stage detector to highlight likely regions of prostate cancer gives better learning stability than a learning-based first stage, owing to the varying visibility and ambiguity of annotations in each sequence. (A sketch of instance-level weighting with a random forest follows this entry.)
- Published
- 2017
- Full Text
- View/download PDF
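Entry 11 describes instance-level weighting so that small and large lesions (and background regions) contribute equally to random forest training. The sketch below shows one plausible realization with scikit-learn, weighting each voxel sample by the inverse size of the instance it belongs to; the features, labels, and instance identifiers are toy data, and the paper's actual feature extraction from T2W, ADC, and B2000 images is not shown.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def instance_weights(instance_ids):
    """Weight each sample by 1 / (number of samples in its instance)."""
    ids, counts = np.unique(instance_ids, return_counts=True)
    size = dict(zip(ids.tolist(), counts.tolist()))
    return np.array([1.0 / size[i] for i in instance_ids])

# Toy data: 1000 "voxels" with 6 features each, grouped into 20 instances
# (lesions or background regions). Real features would come from the MR sequences.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))
instance_ids = rng.integers(0, 20, size=1000)
y = (instance_ids % 2).astype(int)            # toy labels: odd-numbered instances are "cancer"

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y, sample_weight=instance_weights(instance_ids))
print("training accuracy:", clf.score(X, y))
```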
12. Mixed spine metastasis detection through positron emission tomography/computed tomography synthesis and multiclassifier.
- Author
- Yao J, Burns JE, Sanoria V, and Summers RM
- Abstract
Bone metastases are a frequent occurrence in cancer, and early detection can guide the patient's treatment regimen. Metastatic bone disease can present in density extremes as sclerotic (high density) or lytic (low density), or in a continuum with an admixture of both sclerotic and lytic components. We design a framework to detect and characterize the varying spectrum of presentation of spine metastasis on positron emission tomography/computed tomography (PET/CT) data. A technique is proposed to synthesize the CT and PET images to enhance lesion appearance for computer detection. A combination of watershed, graph cut, and level set algorithms is first run to obtain initial detections, which are then sent to multiple classifiers for sclerotic, lytic, and mixed lesions. The system was tested on 44 cases with 225 sclerotic, 139 lytic, and 92 mixed lesions. The results showed that sensitivity (false positives per patient) was 0.81 (2.1), 0.81 (1.3), and 0.76 (2.1) for sclerotic, lytic, and mixed lesions, respectively. The results also demonstrate that using PET/CT data significantly improves computer-aided detection performance over using CT alone. (A sketch of a simple voxelwise PET/CT fusion follows this entry.)
- Published
- 2017
- Full Text
- View/download PDF
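Entry 12 mentions synthesizing the CT and PET images to enhance lesion appearance before detection. The sketch below is a deliberately simple, assumed form of such a fusion (min-max normalization followed by a weighted voxelwise combination); the paper's actual synthesis technique is more involved and is not specified in the abstract.

```python
import numpy as np

def synthesize_pet_ct(ct_hu, pet_suv, w_ct=0.5, w_pet=0.5):
    """Min-max normalize each modality, then take a weighted voxelwise combination."""
    def norm(x):
        x = x.astype(np.float32)
        return (x - x.min()) / (x.max() - x.min() + 1e-8)
    return w_ct * norm(ct_hu) + w_pet * norm(pet_suv)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct = rng.integers(-1000, 1500, size=(16, 64, 64))        # toy CT volume in Hounsfield units
    pet = np.abs(rng.normal(1.0, 0.5, size=(16, 64, 64)))    # toy PET volume (SUV-like values)
    fused = synthesize_pet_ct(ct, pet)
    print(fused.shape, float(fused.min()), float(fused.max()))
```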
13. Automated segmentation of the thyroid gland on thoracic CT scans by multiatlas label fusion and random forest classification.
- Author
- Narayanan D, Liu J, Kim L, Chang KW, Lu L, Yao J, Turkbey EB, and Summers RM
- Abstract
The thyroid is an endocrine gland that regulates metabolism. Thyroid image analysis plays an important role in both diagnostic radiology and radiation oncology treatment planning, but the low tissue contrast of the thyroid relative to surrounding anatomic structures makes manual segmentation of this organ challenging. This work proposes a fully automated system for thyroid segmentation on CT imaging. Following initial thyroid segmentation with multiatlas joint label fusion, a random forest (RF) algorithm was applied. Multiatlas label fusion transfers labels from labeled atlases, warped to the target image using deformable registration, and forms a consensus solution based on optimal weighting of the atlases and their similarity to the target image. Following this initial segmentation, a trained RF classifier scanned the voxels of the target image and assigned class-conditional probabilities, labeling thyroid voxels as positive and nonthyroid voxels as negative. Our method was evaluated on CT scans from 66 patients, 6 of whom served as atlases for multiatlas label fusion. Used independently, the multiatlas label fusion method and the RF classifier achieved average Dice similarity coefficients of [Formula: see text] and [Formula: see text], respectively. Running multiatlas label fusion followed by RF correction sequentially increased the Dice similarity coefficient to [Formula: see text], improving the segmentation accuracy. (A sketch of voxelwise atlas label fusion follows this entry.)
- Published
- 2015
- Full Text
- View/download PDF
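Entry 13 begins with multiatlas label fusion, in which labels from registered atlases are combined into a consensus segmentation. The sketch below uses plain (optionally weighted) voxelwise voting as a simplified stand-in for the joint label fusion with optimal atlas weighting described in the paper; deformable registration itself is not shown, and the atlas label maps are toy data.

```python
import numpy as np

def fuse_atlas_labels(warped_labels, weights=None):
    """Weighted voxelwise vote over binary atlas label maps (thyroid vs. background)."""
    stack = np.stack([lab.astype(np.float32) for lab in warped_labels])
    if weights is None:
        weights = np.ones(len(warped_labels), dtype=np.float32)
    w = np.asarray(weights, dtype=np.float32).reshape(-1, 1, 1, 1)
    votes = (stack * w).sum(axis=0) / w.sum()      # weighted fraction of atlases voting "thyroid"
    return votes >= 0.5

if __name__ == "__main__":
    atlases = [np.random.rand(8, 32, 32) > 0.7 for _ in range(6)]   # 6 toy warped atlas label maps
    consensus = fuse_atlas_labels(atlases)
    print("consensus thyroid voxels:", int(consensus.sum()))
```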