30 results for "Zhoubing Xu"
Search Results
2. A deep image-to-image network organ segmentation algorithm for radiation treatment planning: principles and evaluation
- Author
-
Sebastian Marschner, Manasi Datar, Aurélie Gaasch, Zhoubing Xu, Sasa Grbic, Guillaume Chabin, Bernhard Geiger, Julian Rosenman, Stefanie Corradini, Maximilian Niyazi, Tobias Heimann, Christian Möhler, Fernando Vega, Claus Belka, and Christian Thieke
- Subjects
Organs at Risk, Deep Learning, Oncology, Radiotherapy Planning, Computer-Assisted, Image Processing, Computer-Assisted, Humans, Radiology, Nuclear Medicine and Imaging, Thorax, Tomography, X-Ray Computed, Algorithms
- Abstract
Background: We describe and evaluate a deep network algorithm which automatically contours organs at risk in the thorax and pelvis on computed tomography (CT) images for radiation treatment planning. Methods: The algorithm identifies the region of interest (ROI) automatically by detecting anatomical landmarks around the specific organs using a deep reinforcement learning technique. The segmentation is restricted to this ROI and performed by a deep image-to-image network (DI2IN) based on a convolutional encoder-decoder architecture combined with multi-level feature concatenation. The algorithm is commercially available in the medical products “syngo.via RT Image Suite VB50” and “AI-Rad Companion Organs RT VA20” (Siemens Healthineers). For evaluation, thoracic CT images of 237 patients and pelvic CT images of 102 patients were manually contoured following the Radiation Therapy Oncology Group (RTOG) guidelines and compared to the DI2IN results using metrics for volume, overlap, and distance, e.g., Dice Similarity Coefficient (DSC) and Hausdorff Distance (HD95). The contours were also compared visually slice by slice. Results: We observed high correlations between automatic and manual contours. The best results were obtained for the lungs (DSC 0.97, HD95 2.7 mm/2.9 mm for left/right lung), followed by the heart (DSC 0.92, HD95 4.4 mm), bladder (DSC 0.88, HD95 6.7 mm), and rectum (DSC 0.79, HD95 10.8 mm). Visual inspection showed excellent agreement with some exceptions for the heart and rectum. Conclusions: The DI2IN algorithm automatically generated contours for organs at risk close to those of a human expert, making the contouring step in radiation treatment planning simpler and faster. A few cases still required manual corrections, mainly for the heart and rectum.
- Published
- 2022
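The DSC and HD95 metrics used in the evaluation above are standard overlap and distance measures. A minimal sketch of both, assuming the contours are given as sets of voxel coordinates and using the nearest-rank percentile convention (the paper does not specify its exact percentile convention):

```python
import math

def dice(a, b):
    """Dice similarity coefficient between two voxel sets (1.0 = identical)."""
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance between voxel sets."""
    def directed(src, dst):
        # for each point in src, distance to its nearest point in dst
        return [min(math.dist(p, q) for q in dst) for p in src]
    d = sorted(directed(a, b) + directed(b, a))
    # nearest-rank index of the 95th percentile
    k = max(0, math.ceil(0.95 * len(d)) - 1)
    return d[k]
```

In practice both metrics are computed on dense 3D masks with spacing-aware distance transforms; the set-based version above just makes the definitions concrete.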
3. SynSeg-Net: Synthetic Segmentation Without Target Modality Ground Truth
- Author
-
Zhoubing Xu, Shunxing Bao, Yuankai Huo, Michael R. Savona, Hyeonsoo Moon, Richard G. Abramson, Tamara K. Moyo, Albert Assad, and Bennett A. Landman
- Subjects
Ground truth, Modality (human–computer interaction), Radiological and Ultrasound Technology, Computer science, Computer Vision and Pattern Recognition, Pattern recognition, Image segmentation, Convolutional neural network, Segmentation, Artificial intelligence, Software
- Abstract
A key limitation of deep convolutional neural network (DCNN) based image segmentation methods is their lack of generalizability. Manually traced training images are typically required when segmenting organs in a new imaging modality or from a distinct disease cohort. The manual effort can be alleviated if manually traced images in one imaging modality (e.g., MRI) can train a segmentation network for another imaging modality (e.g., CT). In this paper, we propose an end-to-end synthetic segmentation network (SynSeg-Net) to train a segmentation network for a target imaging modality without having manual labels. SynSeg-Net is trained using (1) unpaired intensity images from the source and target modalities, and (2) manual labels only from the source modality. SynSeg-Net is enabled by recent advances in cycle generative adversarial networks (CycleGAN) and DCNN. We evaluate the performance of SynSeg-Net in two experiments: (1) MRI-to-CT splenomegaly synthetic segmentation for abdominal images, and (2) CT-to-MRI total intracranial volume (TICV) synthetic segmentation for brain images. The proposed end-to-end approach achieved superior performance to two-stage methods. Moreover, SynSeg-Net achieved performance comparable to a traditional segmentation network trained with target-modality labels in certain scenarios. The source code of SynSeg-Net is publicly available (https://github.com/MASILab/SynSeg-Net). Comment: IEEE Transactions on Medical Imaging (TMI).
- Published
- 2019
- Full Text
- View/download PDF
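The SynSeg-Net training objective combines an adversarial loss on synthesized target-modality images, a cycle-consistency loss on reconstructed source images, and a segmentation loss that uses only source-modality labels. A toy scalar sketch of how those terms compose, not the authors' implementation; the weights `lam_cycle` and `lam_seg` are assumed hyperparameters:

```python
import math

def l1(x, y):
    """Mean absolute error: the cycle-consistency term."""
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

def lsgan_gen(d_fake):
    """Least-squares adversarial loss for the generator: push D(fake) toward 1."""
    return sum((d - 1.0) ** 2 for d in d_fake) / len(d_fake)

def seg_ce(probs, labels):
    """Pixelwise binary cross-entropy of the segmenter run on synthesized
    target-modality images, supervised by source-modality labels only.
    Assumes probabilities are not exactly 0/1 on the mismatching side."""
    return -sum(math.log(p if y == 1 else 1.0 - p)
                for p, y in zip(probs, labels)) / len(labels)

def synseg_generator_loss(d_fake_tgt, src, src_cycled, seg_probs, src_labels,
                          lam_cycle=10.0, lam_seg=1.0):
    """Total generator-side objective: adversarial + cycle + segmentation."""
    return (lsgan_gen(d_fake_tgt)
            + lam_cycle * l1(src, src_cycled)
            + lam_seg * seg_ce(seg_probs, src_labels))
```

The key design point the sketch preserves is that `src_labels` never comes from the target modality: the segmenter only ever sees labels via images synthesized from the labeled source domain.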
4. Pancreas CT Segmentation by Predictive Phenotyping
- Author
-
Yucheng Tang, Xin Yu, Jeffrey M. Spraggins, Riqiang Gao, Bennett A. Landman, Shunxing Bao, Yuyin Zhou, Zhoubing Xu, Yuankai Huo, John Virostko, Ho Hin Lee, and Qi Yang
- Subjects
Pancreas CT, Wilcoxon signed-rank test, Computer science, Pattern recognition, Discriminative model, Electronic health record, Pancreatitis, Segmentation, Artificial intelligence, Pancreatic cysts, Pancreas
- Abstract
Pancreas CT segmentation offers promise for understanding the structural manifestation of metabolic conditions. To date, the primary medical record of conditions that impact the pancreas is the electronic health record (EHR), in terms of diagnostic phenotype data (e.g., ICD-10 codes). We posit that similar structural phenotypes could be revealed by studying subjects with similar medical outcomes. Segmentation is mainly driven by imaging data, but this direct approach may not account for differing canonical appearances under different underlying conditions (e.g., pancreatic atrophy versus pancreatic cysts). To this end, we exploit clinical features from EHR data to complement image features for enhancing pancreas segmentation, especially for high-risk outcomes. Specifically, we propose, to the best of our knowledge, the first phenotype embedding model for pancreas segmentation, which predicts representatives that share similar comorbidities. Such an embedding strategy can adaptively refine the segmentation outcome based on discriminative contexts distilled from clinical features. Experiments with 2000 patients' EHR data and 300 CT images of healthy-pancreas, type II diabetes, and pancreatitis subjects show that segmentation by predictive phenotyping significantly improves performance over the state of the art (Dice score 0.775 to 0.791, \(p < 0.05\), Wilcoxon signed-rank test). The proposed method additionally achieves superior performance on two public testing datasets, the BTCV MICCAI Challenge 2015 and TCIA pancreas CT. Our approach provides a promising direction for advancing segmentation with phenotype features while not requiring EHR data as input during testing.
- Published
- 2021
- Full Text
- View/download PDF
5. Robust Multicontrast MRI Spleen Segmentation for Splenomegaly Using Multi-Atlas Segmentation
- Author
-
Richard G. Abramson, Albert Assad, Yuankai Huo, Zhoubing Xu, Jiaqi Liu, Bennett A. Landman, and Robert L. Harrigan
- Subjects
Iterative method, Computer science, MRI spleen, Biomedical Engineering, Spleen, Atlas (anatomy), Graph cut, Image Interpretation, Computer-Assisted, T1 weighted, Humans, Segmentation, Reproducibility of Results, Magnetic resonance imaging, Pattern recognition, Magnetic Resonance Imaging, Splenomegaly, Outlier, Artificial intelligence, Algorithms
- Abstract
Objective: Magnetic resonance imaging (MRI) is an essential imaging modality in noninvasive splenomegaly diagnosis. However, it is challenging to achieve spleen volume measurement from three-dimensional MRI given the diverse structural variations of human abdomens as well as the wide variety of clinical MRI acquisition schemes. Multi-atlas segmentation (MAS) approaches have been widely used and validated to handle heterogeneous anatomical scenarios. In this paper, we propose to use MAS for clinical MRI spleen segmentation for splenomegaly. Methods: First, an automated segmentation method using the selective and iterative method for performance level estimation (SIMPLE) atlas selection is used to address the concerns of inhomogeneity for clinical splenomegaly MRI. Then, to further control outliers, semiautomated craniocaudal spleen length-based SIMPLE atlas selection (L-SIMPLE) is proposed to integrate a spatial prior in a Bayesian fashion and guide iterative atlas selection. Last, a graph cuts refinement is employed to achieve the final segmentation from the probability maps from MAS. Results: A clinical cohort of 55 MRI volumes (28 T1 weighted and 27 T2 weighted) was used to evaluate both automated and semiautomated methods. Conclusion: The results demonstrated that both methods achieved median Dice > 0.9, and outliers were alleviated by the L-SIMPLE (≈1 min of manual effort per scan), which achieved 0.97 Pearson correlation of volume measurements with the manual segmentation. Significance: This paper demonstrates that MAS enables robust spleen segmentation on MRI for splenomegaly.
- Published
- 2018
- Full Text
- View/download PDF
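The SIMPLE atlas selection used above iterates between fusing the currently selected atlases and discarding those that disagree with the consensus. A minimal sketch using majority-vote fusion and a fixed Dice threshold; the published method estimates performance levels statistically, so `thresh` here is an assumed simplification:

```python
def dice(a, b):
    """Dice similarity coefficient between two voxel sets."""
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

def simple_select(atlases, thresh=0.7, max_iter=10):
    """Selective and iterative atlas selection (SIMPLE-style sketch):
    re-fuse the selected atlases by majority vote, then drop atlases whose
    Dice against the current consensus falls below `thresh`; repeat."""
    selected = list(atlases)
    for _ in range(max_iter):
        # majority-vote fusion of the currently selected atlases
        votes = {}
        for atl in selected:
            for v in atl:
                votes[v] = votes.get(v, 0) + 1
        consensus = {v for v, c in votes.items() if c * 2 > len(selected)}
        keep = [a for a in selected if dice(a, consensus) >= thresh]
        if len(keep) == len(selected):
            break  # converged: nothing left to discard
        selected = keep
    return selected, consensus
```

L-SIMPLE, as the abstract describes, additionally biases this iteration with a spatial prior derived from the manually measured craniocaudal spleen length.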
6. Landmark detection and multiorgan segmentation: Representations and supervised approaches
- Author
-
S. Kevin Zhou and Zhoubing Xu
- Subjects
Landmark, Computer science, Supervised learning, Context (language use), Machine learning, Segmentation, Artificial intelligence, Discriminative learning
- Abstract
In this chapter we present discriminative learning approaches for landmark detection and shape segmentation. Specifically, we elaborate different landmark representations and demonstrate how to use them in different supervised learning methods. We then present various shape representations and a learning approach that fuses regression, which models global context, and classification, which models local context, for rapid multiple organ segmentation.
- Published
- 2020
- Full Text
- View/download PDF
7. Automated Quantification of CT Patterns Associated with COVID-19 from Chest CT
- Author
-
Nicolas Murray, Valentin Ziebandt, Siqi Liu, Zhoubing Xu, William Parker, Savvas Nicolaou, Pina C. Sanelli, Stuart L. Cohen, Youngjin Yoo, Dorin Comaniciu, Thomas Flohr, Shikha Chaganti, Sasa Grbic, Alexander W. Sauter, Guillaume Chabin, Bogdan Georgescu, Thomas J. Re, François Mellot, Philippe Grenier, and Abishek Balachandran
- Subjects
2019–20 coronavirus outbreak, Coronavirus disease 2019 (COVID-19), Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), Computer Vision and Pattern Recognition, Chest CT, Disease, Artificial Intelligence, Radiology, Nuclear Medicine and Imaging, Lung, Radiological and Ultrasound Technology, Image and Video Processing, Oncology & Carcinogenesis, Radiology, Original Research
- Abstract
Purpose: To present a method that automatically segments and quantifies abnormal CT patterns commonly present in coronavirus disease 2019 (COVID-19), namely ground-glass opacities and consolidations. Materials and Methods: In this retrospective study, the proposed method takes as input a non-contrast chest CT and segments the lesions, lungs, and lobes in three dimensions, based on a dataset of 9749 chest CT volumes. The method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and the presence of high opacities, based on deep learning and deep reinforcement learning. The first measure (PO, PHO) is global, while the second (LSS, LHOS) is lobe-wise. Evaluation of the algorithm is reported on CTs of 200 participants (100 COVID-19 confirmed patients and 100 healthy controls) from institutions in Canada, Europe, and the United States, collected between 2002 and April 2020. Ground truth was established by manual annotations of lesions, lungs, and lobes. Correlation and regression analyses were performed to compare the predictions to the ground truth. Results: The Pearson correlation coefficient between method prediction and ground truth for COVID-19 cases was 0.92 for PO (P < .001), 0.97 for PHO (P < .001), 0.91 for LSS (P < .001), and 0.90 for LHOS (P < .001). Of the 100 healthy controls, 98 had a predicted PO of less than 1%, and 2 had between 1% and 2%. Automated processing time to compute the severity scores was 10 seconds per case, compared to 30 minutes required for manual annotation. Conclusion: A new method segments regions of CT abnormality associated with COVID-19 and computes (PO, PHO) and (LSS, LHOS) severity scores.
- Published
- 2020
- Full Text
- View/download PDF
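Once the lesions, lungs, and lobes are segmented, the global measures reduce to volume ratios, and the lobe-wise scores bin each lobe's involvement into 0-4 and sum over lobes. A sketch under assumed binning thresholds; the published method's exact bin edges may differ:

```python
def opacity_scores(lung_voxels, lesion_voxels, high_voxels):
    """PO: % of lung volume occupied by lesions; PHO: % occupied by the
    high-opacity (assumed consolidation-like) subset of lesions."""
    po = 100.0 * len(lesion_voxels & lung_voxels) / len(lung_voxels)
    pho = 100.0 * len(high_voxels & lung_voxels) / len(lung_voxels)
    return po, pho

def lobe_score(percent_involved):
    """Map a lobe's involvement percentage to a 0-4 score.
    Assumed bins: 0, (0, 25], (25, 50], (50, 75], (75, 100]."""
    for score, upper in enumerate((0, 25, 50, 75, 100)):
        if percent_involved <= upper:
            return score
    return 4
```

Summing `lobe_score` over the five lobes then yields a 0-20 lobe severity score, with the same construction applied to high opacities for the LHOS variant.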
8. Select, Attend, and Transfer: Light, Learnable Skip Connections
- Author
-
Yefeng Zheng, Puneet Sharma, S. Kevin Zhou, Dorin Comaniciu, Saeid Asgari Taghanaki, Ghassan Hamarneh, Bogdan Georgescu, Zhoubing Xu, Aïcha BenTaieb, and Anmol Sharma
- Subjects
Network architecture, Computer science, Computation, Aggregate (data warehouse), Image segmentation, Machine learning, Discriminative model, Feature (machine learning), Segmentation, Artificial intelligence, Communication channel
- Abstract
Skip connections in deep networks have improved both segmentation and classification performance by facilitating the training of deeper network architectures and reducing the risk of vanishing gradients. Skip connections equip encoder-decoder networks with richer feature representations, but at the cost of higher memory usage and computation, and they can transfer non-discriminative feature maps. In this paper, we focus on improving the skip connections used in segmentation networks. We propose light, learnable skip connections which learn to first select the most discriminative channels and then aggregate the selected channels into a single channel that attends to the most discriminative regions of the input. We evaluate the proposed method on three different 2D and volumetric datasets and demonstrate that the proposed skip connections can outperform the traditional heavy skip connections of four different models in terms of segmentation accuracy (2% Dice), memory usage (at least 50%), and number of network parameters (up to 70%).
- Published
- 2019
- Full Text
- View/download PDF
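The select-then-aggregate behavior of these light skip connections can be illustrated with a toy forward pass. This is a behavioral sketch only: the learned channel-selection weights are replaced here by a mean-absolute-activation score, and the attention map by a sigmoid over the pooled channel:

```python
import math

def select_attend_transfer(fmap, k=2):
    """fmap: list of channels, each a flat list of activations.
    1) select: keep the k channels with the largest mean absolute activation
       (a stand-in for the learned channel-selection weights);
    2) attend: average them into a single channel and squash to (0, 1) so the
       result acts as a spatial attention map passed across the skip."""
    def score(ch):
        return sum(abs(v) for v in ch) / len(ch)
    top = sorted(fmap, key=score, reverse=True)[:k]
    pooled = [sum(vals) / k for vals in zip(*top)]
    return [1.0 / (1.0 + math.exp(-v)) for v in pooled]
```

The memory saving claimed in the abstract follows directly from this shape change: a multi-channel skip tensor is replaced by one single-channel map.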
9. Efficient multi-atlas abdominal segmentation on clinically acquired CT with SIMPLE context learning
- Author
-
Christopher P. Lee, Richard G. Abramson, Zhoubing Xu, Benjamin K. Poulose, Bennett A. Landman, Rebeccah B. Baucom, and Ryan P. Burke
- Subjects
Radiography, Abdominal, Computer science, Iterative method, Image registration, Health Informatics, Context (language use), Sensitivity and Specificity, Pattern Recognition, Automated, Machine Learning, Graph cut, Humans, Radiology, Nuclear Medicine and Imaging, Computer vision, Segmentation, Selection (genetic algorithm), Radiological and Ultrasound Technology, Atlas (topology), Liver Neoplasms, Reproducibility of Results, Pattern recognition, Computer Graphics and Computer-Aided Design, Radiographic Image Enhancement, Radiographic Image Interpretation, Computer-Assisted, Computer Vision and Pattern Recognition, Artificial intelligence, Tomography, X-Ray Computed, Algorithms
- Abstract
Abdominal segmentation on clinically acquired computed tomography (CT) has been a challenging problem given the inter-subject variance of human abdomens and complex 3-D relationships among organs. Multi-atlas segmentation (MAS) provides a potentially robust solution by leveraging label atlases via image registration and statistical fusion. We posit that the efficiency of atlas selection requires further exploration in the context of substantial registration errors. The selective and iterative method for performance level estimation (SIMPLE) method is a MAS technique integrating atlas selection and label fusion that has proven effective for prostate radiotherapy planning. Herein, we revisit atlas selection and fusion techniques for segmenting 12 abdominal structures using clinically acquired CT. Using a re-derived SIMPLE algorithm, we show that performance on multi-organ classification can be improved by accounting for exogenous information through Bayesian priors (so called context learning). These innovations are integrated with the joint label fusion (JLF) approach to reduce the impact of correlated errors among selected atlases for each organ, and a graph cut technique is used to regularize the combined segmentation. In a study of 100 subjects, the proposed method outperformed other comparable MAS approaches, including majority vote, SIMPLE, JLF, and the Wolz locally weighted vote technique. The proposed technique provides consistent improvement over state-of-the-art approaches (median improvement of 7.0% and 16.2% in DSC over JLF and Wolz, respectively) and moves toward efficient segmentation of large-scale clinically acquired CT data for biomarker screening, surgical navigation, and data mining.
- Published
- 2015
- Full Text
- View/download PDF
10. Class-Aware Adversarial Lung Nodule Synthesis in CT Images
- Author
-
Siqi Liu, Guillaume Chabin, Andrew F. Laine, Bogdan Georgescu, Eli Gibson, Arnaud Arindra Adiyoso Setio, Sasa Grbic, Dorin Comaniciu, Jie Yang, and Zhoubing Xu
- Subjects
Computer science, Computer Vision and Pattern Recognition, Context (language use), Malignancy, Medical imaging, Deep learning, Supervised learning, Pattern recognition, Nodule (medicine), Real image, Class (biology), Binary classification, Artificial intelligence
- Abstract
Though large-scale datasets are essential for training deep learning systems, it is expensive to scale up the collection of medical imaging datasets. Synthesizing objects of interest, such as lung nodules, in medical images based on the distribution of annotated datasets can help improve supervised learning tasks, especially when the datasets are limited in size and class balance. In this paper, we propose a class-aware adversarial synthesis framework to synthesize lung nodules in CT images. The framework is built with a coarse-to-fine patch in-painter (generator) and two class-aware discriminators. By conditioning on random latent variables and the target nodule labels, the trained networks are able to generate diverse nodules given the same context. Evaluating on the public LIDC-IDRI dataset, we demonstrate an example application of the proposed framework: improving the accuracy of lung nodule malignancy estimation as a binary classification problem, which is important in the lung screening scenario. We show that combining real image patches and synthetic lung nodules in the training set can improve the mean AUC classification score across different network architectures by 2%.
- Published
- 2018
- Full Text
- View/download PDF
11. Spatially Localized Atlas Network Tiles Enables 3D Whole Brain Segmentation from Limited Data
- Author
-
Yuankai Huo, Camilo Bermudez, Katherine S. Aboud, Zhoubing Xu, Laurie E. Cutting, Bennett A. Landman, Prasanna Parvathaneni, Susan M. Resnick, and Shunxing Bao
- Subjects
Artificial neural network, Computer science, Atlas (topology), Pattern recognition, Brain size, Brain segmentation, Segmentation, Artificial intelligence
- Abstract
Whole brain segmentation on structural magnetic resonance imaging (MRI) is essential for non-invasive investigation of neuroanatomy. Historically, multi-atlas segmentation (MAS) has been regarded as the de facto standard method for whole brain segmentation. Recently, deep neural network approaches have been applied to whole brain segmentation by learning random patches or 2D slices. Yet, few previous efforts have attempted detailed whole brain segmentation using 3D networks due to the following challenges: (1) fitting an entire whole brain volume into a 3D network is restricted by current GPU memory, and (2) the large number of target labels (e.g., >100 labels) must be learned from a limited number of training 3D volumes. The proposed spatially localized atlas network tiles (SLANT) method addresses both challenges by distributing multiple independent 3D networks over overlapping sub-spaces of a standard atlas space, and reduced the segmentation time from >30 h using MAS to ≈15 min. The source code is available online (https://github.com/MASILab/SLANTbrainSeg).
- Published
- 2018
- Full Text
- View/download PDF
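The "network tiles" idea, covering a whole-brain volume with overlapping sub-volumes so each fits a 3D network in GPU memory, can be sketched as computing evenly spaced tile origins per axis. The tile counts and sizes below are illustrative, not the paper's configuration:

```python
from itertools import product

def tile_starts(size, tile, n):
    """Evenly spaced, overlapping start indices so n tiles of width `tile`
    cover [0, size): the first starts at 0, the last ends at `size`."""
    if n == 1:
        return [0]
    step = (size - tile) / (n - 1)
    return [round(i * step) for i in range(n)]

def slant_tiles(shape, tile_shape, n_tiles):
    """Cartesian product of per-axis starts -> list of 3D tile origins,
    one independent segmentation network per origin."""
    axes = [tile_starts(s, t, n) for s, t, n in zip(shape, tile_shape, n_tiles)]
    return list(product(*axes))
```

At inference, each network segments its tile in the registered atlas space and the overlapping predictions are fused (e.g., by label fusion) back into one whole-brain labeling.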
12. Less is More: Simultaneous View Classification and Landmark Detection for Abdominal Ultrasound Images
- Author
-
Andy Milkowski, Sasa Grbic, Bennett A. Landman, Zhoubing Xu, Jin Hyeong Park, Yuankai Huo, and Shaohua Kevin Zhou
- Subjects
Modality (human–computer interaction), Landmark, Computer science, Process (engineering), Ultrasound, Feature extraction, Machine learning, Convolutional neural network, Task (project management), Leverage (statistics), Artificial intelligence
- Abstract
An abdominal ultrasound examination, the most common ultrasound examination, requires substantial manual effort to acquire standard abdominal organ views, annotate the views in text, and record clinically relevant organ measurements. Hence, automatic view classification and landmark detection of the organs can be instrumental in streamlining the examination workflow. However, this is a challenging problem given not only the inherent difficulties of the ultrasound modality, e.g., low contrast and large variations, but also the heterogeneity across tasks, i.e., one classification task for all views and then one landmark detection task for each relevant view. While convolutional neural networks (CNN) have demonstrated more promising outcomes on ultrasound image analytics than traditional machine learning approaches, it is impractical to deploy multiple networks (one for each task) given the limited computational and memory resources on most existing ultrasound scanners. To overcome these limits, we propose a multi-task learning framework that handles all the tasks with a single network. The network performs view classification and landmark detection simultaneously; it is also equipped with global convolutional kernels, coordinate constraints, and a conditional adversarial module to boost performance. In an experimental study based on 187,219 ultrasound images, the proposed simplified approach achieves (1) view classification accuracy better than the agreement between two clinical experts and (2) landmark-based measurement errors on par with inter-user variability. The multi-task approach also benefits from sharing feature extraction across all tasks during training and, as a result, outperforms approaches that address each task individually.
- Published
- 2018
- Full Text
- View/download PDF
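The single-network multi-task objective, one classification loss over all views plus a landmark loss applied only on views that have landmarks, can be sketched in scalar form. This is a toy version; the actual network adds the adversarial and coordinate-constraint terms described in the abstract, and `lam` is an assumed task weight:

```python
import math

def multitask_loss(view_logits, view_label, lm_pred, lm_target, lam=1.0):
    """Softmax cross-entropy for view classification plus L2 landmark
    regression; the landmark term is skipped (lm_target is None) for
    views without relevant landmarks."""
    z = [math.exp(v) for v in view_logits]
    ce = -math.log(z[view_label] / sum(z))
    l2 = 0.0
    if lm_target is not None:
        l2 = sum((p - t) ** 2 for p, t in zip(lm_pred, lm_target)) / len(lm_target)
    return ce + lam * l2
```

Because both heads share one feature extractor, every training image contributes to the classification term while only landmark-bearing views contribute to the regression term.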
13. Multi-atlas Segmentation Enables Robust Multi-contrast MRI Spleen Segmentation for Splenomegaly
- Author
-
Robert L. Harrigan, Bennett A. Landman, Yuankai Huo, Albert Assad, Zhoubing Xu, Jiaqi Liu, and Richard G. Abramson
- Subjects
Computer science, Iterative method, Pattern recognition, Spleen, Magnetic resonance imaging, Image segmentation, Real-time MRI, Atlas (anatomy), Outlier, Abdomen, Segmentation, Artificial intelligence
- Abstract
Non-invasive spleen volume estimation is essential in detecting splenomegaly. Magnetic resonance imaging (MRI) has been used to facilitate splenomegaly diagnosis in vivo. However, achieving accurate spleen volume estimation from MR images is challenging given the great inter-subject variance of human abdomens and the wide variety of clinical images/modalities. Multi-atlas segmentation has been shown to be a promising approach to handle heterogeneous data and difficult anatomical scenarios. In this paper, we propose to use multi-atlas segmentation frameworks for MRI spleen segmentation for splenomegaly. To the best of our knowledge, this is the first work that applies multi-atlas segmentation to splenomegaly as seen on MRI. To address the particular concerns of spleen MRI, automated and novel semi-automated atlas selection approaches are introduced. The automated approach iteratively selects a subset of atlases using the selective and iterative method for performance level estimation (SIMPLE). To further control outliers, semi-automated craniocaudal length-based SIMPLE atlas selection (L-SIMPLE) is proposed to introduce a spatial prior in a Bayesian fashion to guide the iterative atlas selection. A dataset from a clinical trial containing 55 MRI volumes (28 T1 weighted and 27 T2 weighted) was used to evaluate the different methods. Both automated and semi-automated methods achieved median DSC > 0.9. Outliers were alleviated by the L-SIMPLE (≈1 min of manual effort per scan), which achieved 0.9713 Pearson correlation with the manual segmentation. The results demonstrate that multi-atlas segmentation is able to achieve accurate spleen segmentation from multi-contrast splenomegaly MRI scans.
- Published
- 2017
14. Multi-atlas spleen segmentation on CT using adaptive context learning
- Author
-
Yuankai Huo, Bennett A. Landman, Albert Assad, Zhoubing Xu, Jiaqi Liu, and Richard G. Abramson
- Subjects
Training set, Computer science, Atlas (topology), Multi-atlas, Pattern recognition, Image segmentation, Mixture model, Context learning, Segmentation, Computer vision, Artificial intelligence
- Abstract
Automatic spleen segmentation on CT is challenging due to the complexity of abdominal structures. Multi-atlas segmentation (MAS) has been shown to be a promising approach for spleen segmentation. To deal with the substantial registration errors between heterogeneous abdominal CT images, the context learning method for performance level estimation (CLSIMPLE) was previously proposed. The context learning method generates a probability map for a target image using a Gaussian mixture model (GMM) as the prior in a Bayesian framework. However, CLSIMPLE typically trains a single GMM from the entire heterogeneous training atlas set, so the estimated spatial prior maps might not represent specific target images accurately. Rather than using all training atlases, we propose an adaptive GMM-based context learning technique (AGMMCL) that trains the GMM adaptively on subsets of the training data, with the subsets tailored to different target images. Training sets are selected adaptively based on the similarity between atlases and the target image using craniocaudal length, which is derived manually from the target image. To validate the proposed method, a heterogeneous dataset with a large variation of spleen sizes (100 cc to 9000 cc) is used. We designate a metric of size to differentiate groups of spleens, with 0 to 100 cc as small, 200 to 500 cc as medium, 500 to 1000 cc as large, 1000 to 2000 cc as XL, and 2000 cc and above as XXL. The results show that AGMMCL leads to more accurate spleen segmentations by training GMMs adaptively for different target images.
- Published
- 2017
- Full Text
- View/download PDF
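The adaptive part of AGMMCL, choosing the atlas subset by craniocaudal-length similarity before building the spatial prior, can be sketched as follows. A voxel frequency map stands in for the fitted GMM here, and `k` is an assumed subset size:

```python
def adaptive_prior(atlases, lengths, target_length, k=2):
    """Pick the k atlases whose craniocaudal length is closest to the
    target's, then build a voxelwise spatial prior from just that subset
    (each selected atlas contributes 1/k to every voxel it labels spleen)."""
    order = sorted(range(len(atlases)),
                   key=lambda i: abs(lengths[i] - target_length))
    subset = [atlases[i] for i in order[:k]]
    prior = {}
    for atl in subset:
        for v in atl:
            prior[v] = prior.get(v, 0.0) + 1.0 / k
    return prior
```

The point the sketch preserves is that a small spleen's prior is no longer diluted by XXL splenomegaly atlases (and vice versa), which is where a single global model goes wrong.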
15. Supervised Action Classifier: Approaching Landmark Detection as Image Partitioning
- Author
-
Qiangui Huang, David Liu, Mingqing Chen, Zhoubing Xu, Jin-hyeong Park, Dong Yang, Daguang Xu, and S. Kevin Zhou
- Subjects
Landmark, Computer science, Deep learning, Pattern recognition, Robustness (computer science), Medical imaging, Leverage (statistics), Reinforcement learning, Computer vision, Artificial intelligence, Classifier (UML)
- Abstract
In medical imaging, landmarks have significant clinical and scientific importance. Clinical measurements derived from landmarks are used for diagnosis, therapy planning, and interventional guidance in many cases. Automatic algorithms have been studied to reduce the need for manual placement of landmarks. Traditional machine learning techniques provide reasonable results; however, they are limited in either robustness or precision given the complexities and variabilities of medical images. Recently, deep learning technologies have emerged to tackle these problems. Among them, deep reinforcement learning (DRL) has been shown to successfully detect landmark locations by implicitly learning an optimized path from a starting location; however, its learning process can only include subsets of the almost infinite paths across the image context, and it may fail badly if not trained with adequate dataset variations. Here, we propose a new landmark detection approach inspired by DRL. Instead of learning limited action paths across an image in a greedy manner, we construct a global action map over the whole image, which divides the image into four action regions (left, right, up, and bottom) depending on the relative location with respect to the target landmark. The action map guides how to move toward the target landmark from any location in the input image. This effectively translates the landmark detection problem into an image partitioning problem, which enables us to leverage a deep image-to-image network to train a supervised action classifier for landmark detection. We discuss experimental results on two ultrasound datasets (cardiac and obstetric). The proposed algorithm shows consistent improvement over traditional machine learning based and deep learning based methods.
- Published
- 2017
- Full Text
- View/download PDF
16. Deep Image-to-Image Recurrent Network with Shape Basis Learning for Automatic Vertebra Labeling in Large-Scale 3D CT Volumes
- Author
-
Dong Yang, Tao Xiong, Daguang Xu, S. Kevin Zhou, Zhoubing Xu, Mingqing Chen, JinHyeong Park, Sasa Grbic, Trac D. Tran, Sang Peter Chin, Dimitris Metaxas, and Dorin Comaniciu
- Subjects
musculoskeletal diseases, Basis (linear algebra), Computer science, Computed tomography, musculoskeletal system, Image (mathematics), Vertebra, medicine, Computer vision, Artificial intelligence, Scale (map), Volume (compression) - Abstract
A method and apparatus for automated vertebra localization and identification in a 3D computed tomography (CT) volumes is disclosed. Initial vertebra locations in a 3D CT volume of a patient are predicted for a plurality of vertebrae corresponding to a plurality of vertebra labels using a trained deep image-to-image network (DI2IN). The initial vertebra locations for the plurality of vertebrae predicted using the DI2IN are refined using a trained recurrent neural network, resulting in an updated set of vertebra locations for the plurality of vertebrae corresponding to the plurality of vertebrae labels. Final vertebra locations in the 3D CT volume for the plurality of vertebrae corresponding to the plurality of vertebra labels are determined by refining the updated set of vertebra locations using a trained shape-basis deep neural network.
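The shape-basis idea in the final refinement stage can be illustrated with a small PCA sketch (an illustrative toy under assumed details, not the patented network): noisy centroid estimates are projected onto a low-rank basis learned from training spine shapes, which suppresses implausible configurations.

```python
import numpy as np

def learn_shape_basis(train_shapes, n_modes=3):
    """train_shapes: (num_samples, num_vertebrae * dims) flattened
    centroid coordinates. Returns the mean shape and top PCA modes."""
    mean = train_shapes.mean(axis=0)
    _, _, Vt = np.linalg.svd(train_shapes - mean, full_matrices=False)
    return mean, Vt[:n_modes]

def project(shape, mean, basis):
    """Snap a noisy shape onto the learned low-dimensional shape space."""
    coeffs = basis @ (shape - mean)
    return mean + basis.T @ coeffs
```

Components of the noisy estimate that lie outside the learned subspace (e.g., a single implausibly displaced centroid) are removed by the projection.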
- Published
- 2017
- Full Text
- View/download PDF
17. Automatic Vertebra Labeling in Large-Scale 3D CT Using Deep Image-to-Image Network with Message Passing and Sparsity Regularization
- Author
-
Dorin Comaniciu, Tao Xiong, Jin-Hyeong Park, Dimitris N. Metaxas, Daguang Xu, Mingqing Chen, Dong Yang, Trac D. Tran, Sang Peter Chin, Zhoubing Xu, Qiangui Huang, S. Kevin Zhou, and David Liu
- Subjects
Relation (database), Computer science, Message passing, Concatenation, Centroid, Sparse approximation, Regularization (mathematics), Surgical planning, Feature (computer vision), Computer vision, Artificial intelligence - Abstract
Automatic localization and labeling of vertebrae in 3D medical images plays an important role in many clinical tasks, including pathological diagnosis, surgical planning, and postoperative assessment. However, the unusual conditions of pathological cases, such as abnormal spine curvature, bright imaging artifacts caused by metal implants, and a limited field of view, increase the difficulty of accurate localization. In this paper, we propose an automatic and fast algorithm to localize and label the vertebra centroids in 3D CT volumes. First, we deploy a deep image-to-image network (DI2IN) to initialize vertebra locations, employing a convolutional encoder-decoder architecture together with multi-level feature concatenation and deep supervision. Next, the centroid probability maps from the DI2IN are iteratively evolved with message-passing schemes based on the mutual relations of vertebra centroids. Finally, the localization results are refined with sparsity regularization. The proposed method is evaluated on a public dataset of 302 spine CT volumes with various pathologies. Our method outperforms other state-of-the-art methods in terms of localization accuracy, with an average run time of around 3 seconds per case. To further boost performance, we retrained the DI2IN on an additional 1000+ 3D CT volumes from different patients. To the best of our knowledge, this is the first time that more than 1000 3D CT volumes with expert annotation have been used for anatomical landmark detection tasks. Our experimental results show that training with such a large dataset significantly improves performance, and the overall identification rate reaches 90% for the first time, to our knowledge.
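The message-passing refinement can be sketched in 1D (a minimal sketch; the fixed spacing model, the multiplicative messages, and the iteration count are assumptions for illustration, not the paper's exact scheme): each vertebra's probability map is sharpened by messages from its neighbors, shifted by the expected inter-vertebra spacing.

```python
import numpy as np

def pass_messages(prob_maps, spacing, n_iters=3):
    """prob_maps: (num_vertebrae, num_positions) probability maps.
    Neighbors pass shifted copies of their maps as multiplicative
    messages (np.roll wrap-around is ignored in this toy)."""
    p = prob_maps.copy()
    n = len(p)
    for _ in range(n_iters):
        new = p.copy()
        for i in range(n):
            msg = np.ones_like(p[i])
            if i > 0:          # previous vertebra expects me `spacing` further on
                msg = msg * np.roll(p[i - 1], spacing)
            if i < n - 1:      # next vertebra expects me `spacing` earlier
                msg = msg * np.roll(p[i + 1], -spacing)
            new[i] = p[i] * msg
            new[i] /= new[i].sum() + 1e-12
        p = new
    return p
```

A spurious peak that is inconsistent with both neighbors is suppressed, while the anatomically consistent peak survives.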
- Published
- 2017
- Full Text
- View/download PDF
18. Whole abdominal wall segmentation using augmented active shape models (AASM) with multi-atlas label fusion and level set
- Author
-
Richard G. Abramson, Rebeccah B. Baucom, Zhoubing Xu, Bennett A. Landman, and Benjamin K. Poulose
- Subjects
Computer science, Computed tomography, Pattern recognition, Image segmentation, Article, Abdominal wall, Hausdorff distance, Level set, Active shape model, medicine, Abdomen, Segmentation, Artificial intelligence - Abstract
The abdominal wall is an important structure differentiating the subcutaneous and visceral compartments and is intimately involved in maintaining abdominal structure. Segmentation of the whole abdominal wall on routinely acquired computed tomography (CT) scans remains challenging due to the variations and complexities of the wall and surrounding tissues. In this study, we propose a slice-wise augmented active shape model (AASM) approach to robustly segment both the outer and inner surfaces of the abdominal wall. Multi-atlas label fusion (MALF) and level set (LS) techniques are integrated into the traditional ASM framework. The AASM approach globally optimizes the landmark updates in the presence of complicated underlying local anatomical contexts. The proposed approach was validated on 184 axial slices of 20 CT scans. The Hausdorff distance against the manual segmentation was significantly reduced using the proposed approach compared to using ASM, MALF, or LS individually. Our segmentation of the whole abdominal wall enables subcutaneous and visceral fat measurement, with high correlation to measurements derived from manual segmentation. This study presents the first generic algorithm that combines ASM, MALF, and LS, and demonstrates a practical application: automatically capturing visceral and subcutaneous fat volumes.
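The downstream fat measurement enabled by the wall segmentation can be sketched as follows (the function, mask names, and compartment definitions are illustrative assumptions; the HU window is a commonly used adipose range, not necessarily the authors' choice):

```python
import numpy as np

FAT_HU = (-190, -30)   # commonly used adipose-tissue HU window

def fat_volumes(ct, subcut_mask, visceral_mask, voxel_volume_ml):
    """ct: Hounsfield-unit array. subcut_mask: boolean region between the
    skin and the outer wall surface; visceral_mask: region inside the
    inner wall surface. Returns (subcutaneous_ml, visceral_ml)."""
    fat = (ct >= FAT_HU[0]) & (ct <= FAT_HU[1])
    subcut = np.count_nonzero(fat & subcut_mask) * voxel_volume_ml
    visceral = np.count_nonzero(fat & visceral_mask) * voxel_volume_ml
    return subcut, visceral
```

The two compartment masks come directly from the inner and outer wall surfaces the AASM extracts, which is why accurate wall segmentation translates into accurate fat quantification.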
- Published
- 2016
- Full Text
- View/download PDF
19. Evaluation of body-wise and organ-wise registrations for abdominal organs
- Author
-
Benjamin K. Poulose, Ryan P. Burke, Bennett A. Landman, Rebeccah B. Baucom, Sahil A. Panjwani, Christopher P. Lee, Richard G. Abramson, and Zhoubing Xu
- Subjects
Similarity (geometry), Computer science, Normalization (image processing), Image registration, Computed tomography, Image segmentation, Article, Hausdorff distance, medicine, Computer vision, Segmentation, Artificial intelligence - Abstract
Identifying cross-sectional and longitudinal correspondence in the abdomen on computed tomography (CT) scans is necessary for quantitatively tracking change and understanding population characteristics, yet abdominal image registration is a challenging problem. The key difficulty is the large variation in organ dimensions and shapes across subjects. The current standard method uses global, body-wise registration, which aligns images based on global topology. Although this method produces decent results, it is substantially influenced by outliers, leaving room for significant improvement. Here, we study a new image registration approach using local, organ-wise registration: organ-specific bounding boxes are first created, and these regions of interest (ROIs) are then used to align references to the target. Based on the Dice Similarity Coefficient (DSC), Mean Surface Distance (MSD), and Hausdorff Distance (HD), the organ-wise approach is shown to produce significantly better results by minimizing the distorting effects of organ variations. This paper compares exclusively the two registration methods, providing novel quantitative and qualitative comparison data, and is a subset of the more comprehensive problem of improving multi-atlas segmentation via organ normalization.
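The three evaluation metrics named above can be computed compactly for binary masks on the same voxel grid (a minimal sketch assuming isotropic voxel spacing; anisotropic spacing would be passed via `distance_transform_edt`'s `sampling` argument):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface(mask):
    """Boundary voxels: mask minus its erosion."""
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)

def surface_distances(a, b):
    """Distance from each surface voxel of `a` to the surface of `b`."""
    dt = distance_transform_edt(~surface(b))
    return dt[surface(a)]

def msd_hd(a, b):
    """Symmetric mean surface distance and Hausdorff distance."""
    d_ab, d_ba = surface_distances(a, b), surface_distances(b, a)
    return (d_ab.mean() + d_ba.mean()) / 2.0, max(d_ab.max(), d_ba.max())
```

DSC rewards overlap, MSD summarizes typical boundary error, and HD captures the worst-case boundary error, which is why the three are reported together.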
- Published
- 2016
- Full Text
- View/download PDF
20. Segmentation of malignant gliomas through remote collaboration and statistical fusion
- Author
-
Zhoubing Xu, Reid C. Thompson, Andrew J. Asman, Lola B. Chambless, Bennett A. Landman, and Eesha Singh
- Subjects
Computer science, education, Population, Image processing, Glioma, medicine, Medical imaging, Segmentation, Image fusion, Ground truth, Cancer, Magnetic resonance imaging, Pattern recognition, General Medicine, Image segmentation, Edge enhancement, Gold standard (test), Artificial intelligence, Nuclear medicine - Abstract
Purpose: Malignant gliomas represent an aggressive class of central nervous system neoplasms. Correlation of interventional outcomes with tumor morphometry data necessitates 3D segmentation of tumors (typically based on magnetic resonance imaging). Expert delineation is the long-held gold standard for tumor segmentation, but it is exceptionally resource intensive and subject to intra-rater and inter-rater variability. Automated tumor segmentation algorithms have been demonstrated for a variety of imaging modalities and tumor phenotypes, but translation of these methods across clinical study designs is problematic given variation in image acquisition, tumor characteristics, segmentation objectives, and validation criteria. Herein, the authors demonstrate an alternative approach for high-throughput tumor segmentation using Internet-based, collaborative labeling. Methods: In a study of 85 human raters and 98 tumor patients, raters were recruited from a general university campus population (i.e., with no specific medical knowledge), given minimal training, and provided web-based tools to label MRI images based on 2D cross sections. The labeling goal was to extract the enhancing tumor cores on T1-weighted MRI and the bright abnormality on T2-weighted MRI. An experienced rater manually constructed ground truth volumes for a randomly sampled subcohort of 48 tumor subjects (for both T1w and T2w). Raters' task-wise individual observations, as well as the volume-wise truth estimates obtained via statistical fusion, were evaluated on the subjects with ground truth. Results: Individual raters were able to reliably characterize (with >0.8 Dice similarity coefficient, DSC) the gadolinium-enhancing cores and the extent of the edematous areas only slightly more than half of the time. Yet, human raters were efficient at providing these highly variable segmentations (less than 20 s per slice). When statistical fusion was used to combine the results of seven raters per slice for all slices in the datasets, the 3D agreement of the fused results with expertly delineated segmentations was on par with the inter-rater reliability observed between experienced raters using traditional 3D tools (approximately 0.85 DSC). The cumulative time spent per tumor patient with the collaborative approach was equivalent to that of an experienced rater, but the collaborative approach could be achieved with less training time, fewer resources, and efficient parallelization. Conclusions: Hence, collaborative labeling is a promising technique with potentially wide applicability to cost-effective manual labeling of medical images.
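The statistical fusion step can be illustrated with a minimal binary STAPLE-style EM sketch (illustrative only; the abstract does not specify the authors' exact fusion model, and the initialization and iteration count here are assumptions): rater sensitivity and specificity are estimated jointly with the per-voxel true-label probability.

```python
import numpy as np

def staple(D, n_iters=20):
    """D: (num_raters, num_voxels) binary decisions.
    Returns the estimated foreground probability per voxel."""
    R, N = D.shape
    W = D.mean(axis=0)              # initialize with the mean vote
    p = np.full(R, 0.9)             # per-rater sensitivity
    q = np.full(R, 0.9)             # per-rater specificity
    prior = W.mean()
    for _ in range(n_iters):
        # E-step: voxelwise posterior of the true label
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / (a + b + 1e-12)
        # M-step: re-estimate each rater's performance against the estimate
        p = (D * W).sum(axis=1) / (W.sum() + 1e-12)
        q = ((1 - D) * (1 - W)).sum(axis=1) / ((1 - W).sum() + 1e-12)
    return W
```

Unlike plain voting, this down-weights unreliable raters automatically, which is how fused labels from minimally trained raters can approach expert-level agreement.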
- Published
- 2012
- Full Text
- View/download PDF
21. Evaluation of five image registration tools for abdominal CT: pitfalls and opportunities with soft anatomy
- Author
-
Bennett A. Landman, Rebeccah B. Baucom, Zhoubing Xu, Christopher P. Lee, Richard G. Abramson, Benjamin K. Poulose, and Ryan P. Burke
- Subjects
Computer science, Image registration, Computed tomography, Image processing, Image segmentation, Article, Hausdorff distance, Atlas (anatomy), medicine, Abdomen, Segmentation, Computer vision, Affine transformation, Artificial intelligence, Abdominal computed tomography - Abstract
Image registration has become an essential image processing technique for comparing data across time and individuals. With the successes in volumetric brain registration, general-purpose software tools are beginning to be applied to abdominal computed tomography (CT) scans. Herein, we evaluate five current tools for registering clinically acquired abdominal CT scans. Twelve abdominal organs were labeled on a set of 20 atlases to enable assessment of correspondence. The 20 atlases were pairwise registered based on intensity information alone with five registration tools (affine IRTK, FNIRT, Non-Rigid IRTK, NiftyReg, and ANTs). Following the brain literature, the Dice similarity coefficient (DSC), mean surface distance, and Hausdorff distance were calculated for each registered organ individually. However, interpretation was confounded by a significant proportion of outliers. Examining the retrospectively selected top 1 and top 5 atlases for each target revealed a substantive performance difference between methods. To further our understanding, we constructed majority-vote segmentations from the atlases with the top 5 DSC values for each organ and target. The results illustrated a median improvement of 85% in DSC between the raw results and the majority vote. These experiments show that some images may be well registered to some targets using the available software tools, but there is significant room for improvement, revealing the need for innovation and research in abdominal CT registration. If image registration is to be used for local interpretation of abdominal CT, great care must be taken to account for outliers (e.g., via atlas selection in statistical fusion).
- Published
- 2015
- Full Text
- View/download PDF
22. Shape-constrained multi-atlas segmentation of spleen in CT
- Author
-
Richard G. Abramson, Andrew J. Asman, Zhoubing Xu, Bennett A. Landman, Swetasudha Panda, Bo Li, Peter L. Shanahan, and Kristen Merkle
- Subjects
Implicit Shape Model, Computer science, Hausdorff space, Pattern recognition, Article, Constraint (information theory), Level set, Multi-atlas segmentation, Segmentation, Artificial intelligence, Data mining - Abstract
Spleen segmentation on clinically acquired CT data is a challenging problem given the complexity and variability of abdominal anatomy. Multi-atlas segmentation is a potential method for robust estimation of spleen segmentations, but it can be negatively impacted by registration errors. Although labeled atlases explicitly capture information related to feasible organ shapes, multi-atlas methods have largely used this information only implicitly, through registration. We propose to integrate a level set shape model into the traditional label fusion framework to create a shape-constrained multi-atlas segmentation framework. Briefly, we (1) adapt two alternative atlas-to-target registrations to obtain loose bounds on the inner and outer boundaries of the spleen shape, (2) project the fusion estimate onto registered shape models, and (3) convert the projected shape into shape priors. With the constraint of the shape prior, our proposed method offers a statistically significant improvement in spleen labeling accuracy, with an increase in DSC of 0.06, a decrease in symmetric mean surface distance of 4.01 mm, and a decrease in symmetric Hausdorff surface distance of 23.21 mm when compared to a locally weighted vote (LWV) method.
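Step (3), converting a shape into a level-set prior, can be sketched via a signed distance function (a minimal sketch; the sigmoid weighting and its `sigma` are illustrative assumptions, not the paper's exact prior):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance to the shape: negative inside, positive outside."""
    mask = mask.astype(bool)
    inside = distance_transform_edt(mask)    # distance to background
    outside = distance_transform_edt(~mask)  # distance to foreground
    return outside - inside

def shape_prior_weight(mask, sigma=2.0):
    """Soft prior in (0, 1): near 1 deep inside the shape, decaying
    outside; can down-weight fusion votes that stray from feasible shapes."""
    sdf = signed_distance(mask)
    return 1.0 / (1.0 + np.exp(sdf / sigma))
```

Such a soft prior penalizes label-fusion estimates that leak outside the registered shape model while leaving the interior essentially unconstrained.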
- Published
- 2014
- Full Text
- View/download PDF
23. Quantitative Anatomical Labeling of the Anterior Abdominal Wall
- Author
-
Andrew J. Asman, Bennett A. Landman, Zhoubing Xu, Benjamin K. Poulose, and Wade M. Allen
- Subjects
Computed tomography, Hernia repair, Article, Abdominal wall, Ventral hernia, medicine, Performed Procedure, Abdomen, Hernia, Radiology, Presentation (obstetrics) - Abstract
Ventral hernias (VHs) are abnormal openings in the anterior abdominal wall that are common complications of surgical intervention. Repair of VHs is the most commonly performed procedure by general surgeons worldwide, but VH repair outcomes are not particularly encouraging (with recurrence rates up to 43%). A variety of open and laparoscopic techniques are available for hernia repair, and the specific technique used is ultimately driven by surgeon preference and experience. Despite routine acquisition of computed tomography (CT) for VH patients, little quantitative information is available with which to guide selection of a particular approach and/or optimize patient-specific treatment. From anecdotal interviews, the success of VH repair procedures correlates with hernia size, location, and involvement of secondary structures. Herein, we propose an image labeling protocol to segment the anterior abdominal area to provide a geometric basis with which to derive biomarkers and evaluate treatment efficacy. Based on routine clinical CT data, we are able to identify the inner and outer surfaces of the abdominal wall and the herniated volume. This is the first formal presentation of a protocol to quantify these structures on abdominal CT. The intra- and inter-rater reproducibility of this protocol is evaluated on 4 patients with suspected VH (3 were ultimately diagnosed with VH, while 1 was not). Mean surface distances of less than 2 mm were achieved for all structures.
- Published
- 2014
24. Automatic Segmentation of Abdominal Wall in Ventral Hernia CT: A Pilot Study
- Author
-
Bennett A. Landman, Zhoubing Xu, Wade M. Allen, and Benjamin K. Poulose
- Subjects
Image processing, Computed tomography, Context (language use), Image segmentation, Article, Abdominal wall, Ventral hernia, medicine, Abdomen, Segmentation, Radiology - Abstract
The treatment of ventral hernias (VH) has been a challenging problem for medical care. Repair of these hernias is fraught with failure; recurrence rates ranging from 24% to 43% have been reported, even with the use of biocompatible mesh. Currently, computed tomography (CT) is used to guide intervention through expert, but qualitative, clinical judgments; notably, quantitative metrics based on image processing are not used. We propose that image segmentation methods to capture the three-dimensional structure of the abdominal wall and its abnormalities will provide a foundation on which to measure geometric properties of hernias and surrounding tissues and, therefore, to optimize intervention. To date, automated segmentation algorithms have not been presented to quantify the abdominal wall and potential hernias. In this pilot study with four clinically acquired CT scans of post-operative patients, we demonstrate a novel approach to geometric classification of the abdominal wall and essential abdominal features (including bony landmarks and skin surfaces). Our approach uses a hierarchical design in which the abdominal wall is isolated in the context of the skin and bony structures using level set methods. All segmentation results were quantitatively validated with surface errors based on manually labeled ground truth. The mean surface error for the outer surface of the abdominal wall was less than 2 mm. This approach establishes a baseline for characterizing the abdominal wall for improving VH care.
- Published
- 2014
25. Immersive virtual reality for visualization of abdominal CT
- Author
-
Zhoubing Xu, Qiufeng Lin, Benjamin K. Poulose, Bennett A. Landman, Bo Li, Rebeccah B. Baucom, and Robert E. Bodenheimer
- Subjects
Multimedia, Computer science, Stereoscopy, Wired glove, Virtual reality, Entire abdomen, Hernia repair, Article, Visualization, Abdominal wall, Ventral hernia, medicine, Medical imaging, Abdomen, Hernia, Abdominal computed tomography - Abstract
Immersive virtual environments use a stereoscopic head-mounted display and data glove to create high-fidelity virtual experiences in which users can interact with three-dimensional models and perceive relationships at their true scale. This stands in stark contrast to traditional PACS-based infrastructure, in which images are viewed as stacks of two-dimensional slices or, at best, disembodied renderings. Although there has been substantial innovation in immersive virtual environments for entertainment and consumer media, these technologies have not been widely applied in clinical settings. Here, we consider potential applications of immersive virtual environments for ventral hernia patients with abdominal computed tomography imaging data. Nearly a half million ventral hernias occur in the United States each year, and hernia repair is the most commonly performed general surgery operation worldwide. A significant problem in these conditions is communicating the urgency, degree of severity, and impact of a hernia (and potential repair) on patient quality of life. Hernias are defined by ruptures in the abdominal wall (i.e., the absence of healthy tissues) rather than by a growth (e.g., cancer); therefore, understanding a hernia necessitates understanding the entire abdomen. Our environment allows surgeons and patients to view body scans at true scale and interact with these virtual models using a data glove. This visualization and interaction allow users to perceive the relationship between physical structures and medical imaging data. The system provides close integration of PACS-based CT data with immersive virtual environments and creates opportunities to study and optimize interfaces for patient communication, operative planning, and medical education.
- Published
- 2013
- Full Text
- View/download PDF
26. Collaborative Labeling of Malignant Glioma with WebMILL: A First Look
- Author
-
Lola B. Chambless, Andrew J. Asman, Bennett A. Landman, Reid C. Thompson, Eesha Singh, and Zhoubing Xu
- Subjects
Surgical resection, Computer science, education, Cancer, Magnetic resonance imaging, Machine learning, Primary Neoplasm, Article, Resource (project management), Glioma, medicine, Neoplasm, Segmentation, Artificial intelligence, Set (psychology) - Abstract
Malignant gliomas are the most common form of primary neoplasm in the central nervous system and one of the most rapidly fatal of all human malignancies. They are treated by maximal surgical resection followed by radiation and chemotherapy. Herein, we seek to improve the methods available to quantify the extent of tumors using newly presented, collaborative labeling techniques on magnetic resonance imaging. Traditionally, labeling medical images has entailed expert raters operating on one image at a time, which is resource intensive and not practical for very large datasets. Using many minimally trained raters to label images could minimize laboratory requirements and allow high degrees of parallelism; a successful effort could also reduce overall cost. This potentially transformative technology presents a new set of problems, because one must pose the labeling challenge in a manner accessible to people with little or no background in labeling medical images, and raters cannot be expected to read detailed instructions. Hence, a different training method has to be employed: the training must appeal to all types of learners and present the same concepts in multiple ways to ensure that all subjects understand the basics of labeling. Our overall objective is to demonstrate the feasibility of studying malignant glioma morphometry through statistical analysis of the collaborative efforts of many minimally trained raters. This study presents preliminary results on optimization of the WebMILL framework for neoplasm labeling and investigates the initial contributions of 78 raters labeling 98 whole-brain datasets.
- Published
- 2013
27. Segmentation of malignant gliomas through remote collaboration and statistical fusion
- Author
-
Zhoubing, Xu, Andrew J, Asman, Eesha, Singh, Lola, Chambless, Reid, Thompson, and Bennett A, Landman
- Subjects
Internet; Imaging, Three-Dimensional; Blood-Brain Barrier; Data Interpretation, Statistical; Image Processing, Computer-Assisted; Edema; Humans; Glioma; Cooperative Behavior; Magnetic Resonance Physics; Magnetic Resonance Imaging - Abstract
Malignant gliomas represent an aggressive class of central nervous system neoplasms. Correlation of interventional outcomes with tumor morphometry data necessitates 3D segmentation of tumors (typically based on magnetic resonance imaging). Expert delineation is the long-held gold standard for tumor segmentation, but it is exceptionally resource intensive and subject to intra-rater and inter-rater variability. Automated tumor segmentation algorithms have been demonstrated for a variety of imaging modalities and tumor phenotypes, but translation of these methods across clinical study designs is problematic given variation in image acquisition, tumor characteristics, segmentation objectives, and validation criteria. Herein, the authors demonstrate an alternative approach for high-throughput tumor segmentation using Internet-based, collaborative labeling. In a study of 85 human raters and 98 tumor patients, raters were recruited from a general university campus population (i.e., with no specific medical knowledge), given minimal training, and provided web-based tools to label MRI images based on 2D cross sections. The labeling goal was to extract the enhancing tumor cores on T1-weighted MRI and the bright abnormality on T2-weighted MRI. An experienced rater manually constructed ground truth volumes for a randomly sampled subcohort of 48 tumor subjects (for both T1w and T2w). Raters' task-wise individual observations, as well as the volume-wise truth estimates obtained via statistical fusion, were evaluated on the subjects with ground truth. Individual raters were able to reliably characterize (with >0.8 Dice similarity coefficient, DSC) the gadolinium-enhancing cores and the extent of the edematous areas only slightly more than half of the time. Yet, human raters were efficient at providing these highly variable segmentations (less than 20 s per slice). When statistical fusion was used to combine the results of seven raters per slice for all slices in the datasets, the 3D agreement of the fused results with expertly delineated segmentations was on par with the inter-rater reliability observed between experienced raters using traditional 3D tools (approximately 0.85 DSC). The cumulative time spent per tumor patient with the collaborative approach was equivalent to that of an experienced rater, but the collaborative approach could be achieved with less training time, fewer resources, and efficient parallelization. Hence, collaborative labeling is a promising technique with potentially wide applicability to cost-effective manual labeling of medical images.
- Published
- 2012
28. Collaborative labeling of malignant glioma
- Author
-
Zhoubing Xu, Eesha Singh, Andrew J. Asman, Reid C. Thompson, Lola B. Chambless, and Bennett A. Landman
- Subjects
Surgical resection, Image registration, Magnetic resonance imaging, Article, Text mining, Glioma, medicine, Medical imaging, Medical physics, Medical imaging data - Abstract
Malignant gliomas represent an aggressive class of central nervous system neoplasms which are often treated by maximal surgical resection. Herein, we seek to improve the methods available to quantify the extent of tumors as seen on magnetic resonance imaging using Internet-based, collaborative labeling. In a study of clinically acquired images, we demonstrate that teams of minimally trained human raters are able to reliably characterize the gadolinium-enhancing core and edema tumor regions (Dice ≈ 0.9). The collaborative approach is highly parallel and efficient in terms of time (the total time spent by the collective is equivalent to that of a single expert) and resources (only minimal training and no hardware is provided to the participants). Hence, collaborative labeling is a very promising new technique with potentially wide applicability to facilitate cost-effective manual labeling of medical imaging data.
- Published
- 2012
- Full Text
- View/download PDF
29. Self-assessed performance improves statistical fusion of image labels
- Author
-
Wade M. Allen, Daniel S. Reich, Bennett A. Landman, Zhoubing Xu, Frederick W. Bryan, and Andrew J. Asman
- Subjects
Image fusion, Computer science, Population, Context (language use), Pattern recognition, General Medicine, Image segmentation, Sensor fusion, Segmentation, Artificial intelligence, Data mining, education, Information integration - Abstract
Purpose: Expert manual labeling is the gold standard for image segmentation, but this process is difficult, time-consuming, and prone to inter-individual differences. While fully automated methods have successfully targeted many anatomies, they have not yet been developed for numerous essential structures (e.g., the internal structure of the spinal cord as seen on magnetic resonance imaging). Collaborative labeling is a new paradigm that offers a robust alternative, one that may realize both the throughput of automation and the guidance of experts. Yet, distributing manual labeling expertise across individuals and sites introduces potential human factors concerns (e.g., training, software usability) and statistical considerations (e.g., fusion of information, assessment of confidence, bias) that must be further explored. During the labeling process, it is simple to ask raters to self-assess the confidence of their labels, but this is rarely done and has not previously been quantitatively studied. Herein, the authors explore the utility of self-assessment in relation to automated assessment of rater performance in the context of statistical fusion. Methods: The authors conducted a study of 66 volumes manually labeled by 75 minimally trained human raters recruited from the university undergraduate population. Raters were given 15 min of training, during which they were shown examples of correct segmentation and the online segmentation tool was demonstrated. The volumes were labeled 2D slice-wise, and the slices were unordered. A self-assessed quality metric was produced by raters for each slice by marking a confidence bar superimposed on the slice. Volumes produced by both voting and statistical fusion algorithms were compared against a set of expert segmentations of the same volumes. Results: Labels for 8825 distinct slices were obtained. Simple majority voting resulted in statistically poorer performance than voting weighted by self-assessed performance. Statistical fusion resulted in performance statistically indistinguishable from self-assessment-weighted voting. The authors developed a new theoretical basis for using self-assessed performance in the framework of statistical fusion and demonstrated that the combined sources of information (both statistical assessment and self-assessment) yielded statistically significant improvement over either method considered separately. Conclusions: The authors present the first systematic characterization of self-assessed performance in manual labeling. They demonstrate that self-assessment and statistical fusion yield similar but complementary benefits for label fusion. Finally, they present a new theoretical basis for combining self-assessments with statistical label fusion.
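The contrast between simple majority voting and self-assessment-weighted voting can be sketched as follows (a minimal sketch; the per-rater confidence values standing in for the marked confidence bar, and the 0.5 threshold, are illustrative assumptions):

```python
import numpy as np

def majority_vote(labels):
    """labels: (num_raters, num_voxels) binary labels."""
    return (labels.mean(axis=0) > 0.5).astype(int)

def confidence_weighted_vote(labels, confidence):
    """confidence: (num_raters,) self-assessed quality in [0, 1]."""
    w = confidence / confidence.sum()
    return ((w[:, None] * labels).sum(axis=0) > 0.5).astype(int)
```

When unreliable raters honestly report low confidence, the weighted vote can recover the correct label even where the unweighted majority is wrong.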
- Published
- 2014
30. Texture analysis improves level set segmentation of the anterior abdominal wall
- Author
-
Bennett A. Landman, Rebeccah B. Baucom, Benjamin K. Poulose, Zhoubing Xu, and Wade M. Allen
- Subjects
medicine.medical_specialty ,Contextual image classification ,business.industry ,Feature extraction ,Pattern recognition ,General Medicine ,Image segmentation ,Abdominal wall ,medicine.anatomical_structure ,Image texture ,medicine ,Medical imaging ,Segmentation ,Radiology ,Artificial intelligence ,Tomography ,business - Abstract
Purpose: The treatment of ventral hernias (VH) has been a challenging problem for medical care. Repair of these hernias is fraught with failure; recurrence rates ranging from 24% to 43% have been reported, even with the use of biocompatible mesh. Currently, computed tomography (CT) is used to guide intervention through expert, but qualitative, clinical judgments; notably, quantitative metrics based on image processing are not used. The authors propose that image segmentation methods to capture the three-dimensional structure of the abdominal wall and its abnormalities will provide a foundation on which to measure geometric properties of hernias and surrounding tissues and, therefore, to optimize intervention.
Methods: In this study of 20 clinically acquired CT scans of postoperative patients, the authors demonstrated a novel approach to geometric classification of the abdominal wall. The authors’ approach uses texture analysis based on Gabor filters to extract feature vectors, then applies fuzzy c-means clustering to estimate voxelwise probability memberships for eight clusters. The memberships estimated from the texture analysis help identify anatomical structures with inhomogeneous intensities. The memberships were used to guide the level set evolution and to derive an initialization close to the abdominal wall.
Results: Segmentation results on abdominal walls were both quantitatively and qualitatively validated with surface errors based on manually labeled ground truth. Using texture, mean surface errors for the outer surface of the abdominal wall were less than 2 mm, with 91% of the outer surface less than 5 mm away from the manual tracings; errors were significantly greater (2–5 mm) for methods that did not use the texture.
Conclusions: The authors’ approach establishes a baseline for characterizing the abdominal wall for improving VH care. Inherent texture patterns in CT scans aid tissue classification, and texture analysis can improve level set segmentation around the abdominal region.
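The fuzzy c-means membership step that produces the voxelwise cluster probabilities can be sketched in numpy as follows. The Gabor feature extraction is omitted, and all names (`fcm_memberships`, `fcm`) and the toy 1-D data are illustrative assumptions, not the authors' code.

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0, eps=1e-10):
    """Fuzzy c-means membership update.

    X: (n_samples, n_features) feature vectors (e.g., texture responses);
    centers: (c, n_features). Returns (n_samples, c) memberships that
    sum to 1 across clusters for each sample.
    """
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
    inv = d ** (-2.0 / (m - 1.0))   # standard FCM exponent for fuzzifier m
    return inv / inv.sum(axis=1, keepdims=True)

def fcm(X, init_centers, m=2.0, n_iter=50):
    """Alternate membership and center updates from given initial centers."""
    centers = np.asarray(init_centers, dtype=float)
    for _ in range(n_iter):
        U = fcm_memberships(X, centers, m)
        w = U ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]   # weighted cluster means
    return U, centers

# Toy 1-D "feature" data: two well-separated groups standing in for
# distinct texture responses inside and outside the abdominal wall.
X = np.array([[0.0], [0.1], [-0.1], [10.0], [9.9], [10.1]])
U, centers = fcm(X, init_centers=[[1.0], [9.0]])
print(U.round(3))   # each sample belongs almost entirely to its nearby cluster
```

In the paper's pipeline, such soft memberships (over eight clusters, on Gabor feature vectors) serve two roles: they supply an initialization near the abdominal wall and they modulate the level set evolution, which is why inhomogeneous-intensity structures are handled better than with intensity alone.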
- Published
- 2013