146 results for "Bao, Shunxing"
Search Results
2. Deep learning-based free-water correction for single-shell diffusion MRI
- Author
Yao, Tianyuan, Archer, Derek B., Kanakaraj, Praitayini, Newlin, Nancy, Bao, Shunxing, Moyer, Daniel, Schilling, Kurt, Landman, Bennett A., and Huo, Yuankai
- Published
- 2025
- Full Text
- View/download PDF
3. Cross-scale multi-instance learning for pathological image diagnosis
- Author
Deng, Ruining, Cui, Can, Remedios, Lucas W., Bao, Shunxing, Womick, R. Michael, Chiron, Sophie, Li, Jia, Roland, Joseph T., Lau, Ken S., Liu, Qi, Wilson, Keith T., Wang, Yaohong, Coburn, Lori A., Landman, Bennett A., and Huo, Yuankai
- Published
- 2024
- Full Text
- View/download PDF
4. UNesT: Local spatial representation learning with hierarchical transformer for efficient medical segmentation
- Author
Yu, Xin, Yang, Qi, Zhou, Yinchi, Cai, Leon Y., Gao, Riqiang, Lee, Ho Hin, Li, Thomas, Bao, Shunxing, Xu, Zhoubing, Lasko, Thomas A., Abramson, Richard G., Zhang, Zizhao, Huo, Yuankai, Landman, Bennett A., and Tang, Yucheng
- Published
- 2023
- Full Text
- View/download PDF
5. Hierarchical particle optimization for cortical shape correspondence in temporal lobe resection
- Author
Liu, Yue, Bao, Shunxing, Englot, Dario J., Morgan, Victoria L., Taylor, Warren D., Wei, Ying, Oguz, Ipek, Landman, Bennett A., and Lyu, Ilwoo
- Published
- 2023
- Full Text
- View/download PDF
6. Integrating the BIDS Neuroimaging Data Format and Workflow Optimization for Large-Scale Medical Image Analysis
- Author
Bao, Shunxing, Boyd, Brian D., Kanakaraj, Praitayini, Ramadass, Karthik, Meyer, Francisco A. C., Liu, Yuqian, Duett, William E., Huo, Yuankai, Lyu, Ilwoo, Zald, David H., Smith, Seth A., Rogers, Baxter P., and Landman, Bennett A.
- Published
- 2022
- Full Text
- View/download PDF
7. Workflow Integration of Research AI Tools into a Hospital Radiology Rapid Prototyping Environment
- Author
Kanakaraj, Praitayini, Ramadass, Karthik, Bao, Shunxing, Basford, Melissa, Jones, Laura M., Lee, Ho Hin, Xu, Kaiwen, Schilling, Kurt G., Carr, John Jeffrey, Terry, James Gregory, Huo, Yuankai, Sandler, Kim Lori, Newton, Allen T., and Landman, Bennett A.
- Published
- 2022
- Full Text
- View/download PDF
8. Multi-contrast computed tomography healthy kidney atlas
- Author
Lee, Ho Hin, Tang, Yucheng, Xu, Kaiwen, Bao, Shunxing, Fogo, Agnes B., Harris, Raymond, de Caestecker, Mark P., Heinrich, Mattias, Spraggins, Jeffrey M., Huo, Yuankai, and Landman, Bennett A.
- Published
- 2022
- Full Text
- View/download PDF
9. High-resolution 3D abdominal segmentation with random patch network fusion
- Author
Tang, Yucheng, Gao, Riqiang, Lee, Ho Hin, Han, Shizhong, Chen, Yunqiang, Gao, Dashan, Nath, Vishwesh, Bermudez, Camilo, Savona, Michael R., Abramson, Richard G., Bao, Shunxing, Lyu, Ilwoo, Huo, Yuankai, and Landman, Bennett A.
- Published
- 2021
- Full Text
- View/download PDF
10. Labeling lateral prefrontal sulci using spherical data augmentation and context-aware training
- Author
Lyu, Ilwoo, Bao, Shunxing, Hao, Lingyan, Yao, Jewelia, Miller, Jacob A., Voorhies, Willa, Taylor, Warren D., Bunge, Silvia A., Weiner, Kevin S., and Landman, Bennett A.
- Published
- 2021
- Full Text
- View/download PDF
11. Time-distanced gates in long short-term memory networks
- Author
Gao, Riqiang, Tang, Yucheng, Xu, Kaiwen, Huo, Yuankai, Bao, Shunxing, Antic, Sanja L., Epstein, Emily S., Deppen, Steve, Paulson, Alexis B., Sandler, Kim L., Massion, Pierre P., and Landman, Bennett A.
- Published
- 2020
- Full Text
- View/download PDF
12. Multi-path x-D recurrent neural networks for collaborative image classification
- Author
Gao, Riqiang, Huo, Yuankai, Bao, Shunxing, Tang, Yucheng, Antic, Sanja L., Epstein, Emily S., Deppen, Steve, Paulson, Alexis B., Sandler, Kim L., Massion, Pierre P., and Landman, Bennett A.
- Published
- 2020
- Full Text
- View/download PDF
13. 3D whole brain segmentation using spatially localized atlas network tiles
- Author
Huo, Yuankai, Xu, Zhoubing, Xiong, Yunxi, Aboud, Katherine, Parvathaneni, Prasanna, Bao, Shunxing, Bermudez, Camilo, Resnick, Susan M., Cutting, Laurie E., and Landman, Bennett A.
- Published
- 2019
- Full Text
- View/download PDF
14. Towards Portable Large-Scale Image Processing with High-Performance Computing
- Author
Huo, Yuankai, Blaber, Justin, Damon, Stephen M., Boyd, Brian D., Bao, Shunxing, Parvathaneni, Prasanna, Noguera, Camilo Bermudez, Chaganti, Shikha, Nath, Vishwesh, Greer, Jasmine M., Lyu, Ilwoo, French, William R., Newton, Allen T., Rogers, Baxter P., and Landman, Bennett A.
- Published
- 2018
- Full Text
- View/download PDF
15. All-in-SAM: from Weak Annotation to Pixel-wise Nuclei Segmentation with Prompt-based Finetuning
- Author
Cui, Can, Deng, Ruining, Liu, Quan, Yao, Tianyuan, Bao, Shunxing, Remedios, Lucas W., Tang, Yucheng, and Huo, Yuankai
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, Machine Learning (cs.LG)
- Abstract
The Segment Anything Model (SAM) is a recently proposed prompt-based model for generic zero-shot segmentation. With this zero-shot capacity, SAM achieves impressive flexibility and precision on various segmentation tasks. However, the current pipeline requires manual prompts during the inference stage, which remains resource-intensive for biomedical image segmentation. In this paper, instead of using prompts during the inference stage, we introduce a pipeline, called all-in-SAM, that utilizes SAM throughout the entire AI development workflow (from annotation generation to model finetuning) without requiring manual prompts at inference. Specifically, SAM is first employed to generate pixel-level annotations from weak prompts (e.g., points, bounding boxes). Then, the pixel-level annotations are used to finetune the SAM segmentation model rather than training it from scratch. Our experimental results reveal two key findings: 1) the proposed pipeline surpasses state-of-the-art (SOTA) methods on a nuclei segmentation task on the public MoNuSeg dataset, and 2) the use of weak and few annotations for SAM finetuning achieves performance competitive with strong pixel-wise annotated data. [A minimal illustrative sketch of the pipeline follows this record.]
- Published
- 2023
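For orientation, the two-stage logic described in the abstract above can be sketched in Python. This is a minimal, hypothetical illustration, not the authors' code: the checkpoint path is an assumption, and only the pseudo-label generation stage is shown; the finetuning stage would treat the resulting masks as ordinary supervision.

    import numpy as np
    from segment_anything import sam_model_registry, SamPredictor

    # Assumed local checkpoint path; any SAM backbone works for the sketch.
    sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
    predictor = SamPredictor(sam)

    def weak_prompts_to_pseudo_labels(image: np.ndarray, boxes: np.ndarray) -> np.ndarray:
        """Stage 1: convert weak box prompts into pixel-level pseudo-labels."""
        predictor.set_image(image)                  # RGB image, HxWx3
        masks = []
        for box in boxes:                           # one XYXY box per nucleus
            m, _, _ = predictor.predict(box=box, multimask_output=False)
            masks.append(m[0])
        return np.stack(masks).any(axis=0)          # union of per-nucleus masks

    # Stage 2 (not shown): finetune the SAM mask decoder, or any segmenter, on
    # these pseudo-labels with a standard supervised loss, so that no manual
    # prompts are needed at inference time.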
16. All-in-SAM: from Weak Annotation to Pixel-wise Nuclei Segmentation with Prompt-based Finetuning.
- Author
Cui, Can, Deng, Ruining, Liu, Quan, Yao, Tianyuan, Bao, Shunxing, Remedios, Lucas W., Landman, Bennett A., Tang, Yucheng, and Huo, Yuankai
- Published
- 2024
- Full Text
- View/download PDF
17. A 3D and Explainable Artificial Intelligence Model for Evaluation of Chronic Otitis Media Based on Temporal Bone Computed Tomography: Model Development, Validation, and Clinical Application.
- Author
Chen, Binjun, Li, Yike, Sun, Yu, Sun, Haojie, Wang, Yanmei, Lyu, Jihan, Guo, Jiajie, Bao, Shunxing, Cheng, Yushu, Niu, Xun, Yang, Lian, Xu, Jianghong, Yang, Juanmei, Huang, Yibo, Chi, Fanglu, Liang, Bo, and Ren, Dongdong
- Subjects
ARTIFICIAL intelligence, MACHINE learning, RECEIVER operating characteristic curves, TEMPORAL bone, CONVOLUTIONAL neural networks
- Abstract
Background: Temporal bone computed tomography (CT) helps diagnose chronic otitis media (COM). However, its interpretation requires training and expertise. Artificial intelligence (AI) can help clinicians evaluate COM through CT scans, but existing models lack transparency and may not fully leverage multidimensional diagnostic information. Objective: We aimed to develop an explainable AI system based on 3D convolutional neural networks (CNNs) for automatic CT-based evaluation of COM. Methods: Temporal bone CT scans were retrospectively obtained from patients operated on for COM between December 2015 and July 2021 at 2 independent institutes. A region of interest encompassing the middle ear was automatically segmented, and 3D CNNs were subsequently trained to identify pathological ears and cholesteatoma. An ablation study was performed to refine the model architecture. Benchmark tests were conducted against a baseline 2D model and 7 clinical experts. Model performance was measured through cross-validation and external validation. Heat maps, generated using Gradient-Weighted Class Activation Mapping, were used to highlight critical decision-making regions. Finally, the AI system was assessed with a prospective cohort to aid clinicians in preoperative COM assessment. Results: Internal and external data sets contained 1661 and 108 patients (3153 and 211 eligible ears), respectively. The 3D model exhibited decent performance, with mean areas under the receiver operating characteristic curves of 0.96 (SD 0.01) and 0.93 (SD 0.01), and mean accuracies of 0.878 (SD 0.017) and 0.843 (SD 0.015), respectively, for detecting pathological ears on the 2 data sets. Similar outcomes were observed for cholesteatoma identification (mean areas under the receiver operating characteristic curve 0.85, SD 0.03 and 0.83, SD 0.05; mean accuracies 0.783, SD 0.04 and 0.813, SD 0.033, respectively). The proposed 3D model achieved a commendable balance between performance and network size relative to alternative models. It significantly outperformed the 2D approach in detecting COM (P ≤ .05) and exhibited a substantial gain in identifying cholesteatoma (P < .001). The model also demonstrated superior diagnostic capabilities over resident fellows and the attending otologist (P < .05), rivaling all senior clinicians in both tasks. The generated heat maps properly highlighted the middle ear and mastoid regions, aligning with human knowledge in interpreting temporal bone CT. The resulting AI system achieved an accuracy of 81.8% in generating preoperative diagnoses for 121 patients and contributed to clinical decision-making in 90.1% of cases. Conclusions: We present a 3D CNN model trained to detect pathological changes and identify cholesteatoma via temporal bone CT scans. In both tasks, this model significantly outperforms the baseline 2D approach, achieving levels comparable with or surpassing those of human experts. The model also exhibits decent generalizability and enhanced comprehensibility. This AI system facilitates automatic COM assessment and shows promising viability in real-world clinical settings. These findings underscore AI's potential as a valuable aid for clinicians in COM evaluation. Trial Registration: Chinese Clinical Trial Registry ChiCTR2000036300; https://www.chictr.org.cn/showprojEN.html?proj=58685 [A minimal Grad-CAM sketch follows this record.]
- Published
- 2024
- Full Text
- View/download PDF
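The heat maps mentioned in the abstract above come from Gradient-Weighted Class Activation Mapping, a generic technique that can be sketched in a few lines of PyTorch. The module name model.features and the input shape are assumptions for illustration; this is not the authors' implementation.

    import torch

    def grad_cam_3d(model, x, target_class):
        """x: CT volume tensor of shape [1, 1, D, H, W]; returns a [1, D', H', W'] map."""
        acts, grads = {}, {}
        layer = model.features  # assumed: the last 3D conv block of the network
        h1 = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
        h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
        logits = model(x)
        logits[0, target_class].backward()
        h1.remove(); h2.remove()
        w = grads["g"].mean(dim=(2, 3, 4), keepdim=True)  # channel importance weights
        cam = torch.relu((w * acts["a"]).sum(dim=1))      # weighted sum of feature maps
        return cam / (cam.max() + 1e-8)                   # normalize; upsample to overlay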
18. Cross-scale Multi-instance Learning for Pathological Image Diagnosis
- Author
Deng, Ruining, Cui, Can, Remedios, Lucas W., Bao, Shunxing, Womick, R. Michael, Chiron, Sophie, Li, Jia, Roland, Joseph T., Lau, Ken S., Liu, Qi, Wilson, Keith T., Wang, Yaohong, Coburn, Lori A., Landman, Bennett A., and Huo, Yuankai
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Artificial Intelligence (cs.AI), Computer Science - Artificial Intelligence, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, Machine Learning (cs.LG)
- Abstract
Analyzing high-resolution whole slide images (WSIs) with regard to information across multiple scales poses a significant challenge in digital pathology. Multi-instance learning (MIL) is a common solution for working with high-resolution images by classifying bags of objects (i.e., sets of smaller image patches). However, such processing is typically performed at a single scale (e.g., 20x magnification) of WSIs, disregarding the vital inter-scale information that is key to diagnoses by human pathologists. In this study, we propose a novel cross-scale MIL algorithm to explicitly aggregate inter-scale relationships into a single MIL network for pathological image diagnosis. The contribution of this paper is three-fold: (1) a novel cross-scale MIL (CS-MIL) algorithm that integrates multi-scale information and inter-scale relationships is proposed; (2) a toy dataset with scale-specific morphological features is created and released to examine and visualize differential cross-scale attention; (3) superior performance on both in-house and public datasets is demonstrated by our simple cross-scale MIL strategy. The official implementation is publicly available at https://github.com/hrlblab/CS-MIL. [A minimal sketch of the cross-scale attention follows this record.]
- Published
- 2023
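To make the cross-scale aggregation concrete, here is a minimal sketch of one way to realize it: attention-pool patches within each magnification, then apply a second attention over the pooled scale embeddings. Dimensions and module names are illustrative assumptions; the official implementation is at the repository cited above.

    import torch
    import torch.nn as nn

    class CrossScaleMIL(nn.Module):
        def __init__(self, dim=512, n_classes=2):
            super().__init__()
            self.patch_attn = nn.Linear(dim, 1)   # attention over patches within a scale
            self.scale_attn = nn.Linear(dim, 1)   # attention across scales
            self.head = nn.Linear(dim, n_classes)

        def forward(self, bags):
            """bags: list of [n_patches_s, dim] feature tensors, one per scale."""
            pooled = []
            for feats in bags:
                a = torch.softmax(self.patch_attn(feats), dim=0)   # [n_s, 1]
                pooled.append((a * feats).sum(dim=0))              # scale embedding
            s = torch.stack(pooled)                                # [n_scales, dim]
            b = torch.softmax(self.scale_attn(s), dim=0)           # cross-scale weights
            return self.head((b * s).sum(dim=0))                   # bag-level logits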
19. Scaling Up 3D Kernels with Bayesian Frequency Re-parameterization for Medical Image Segmentation
- Author
Lee, Ho Hin, Liu, Quan, Bao, Shunxing, Yang, Qi, Yu, Xin, Cai, Leon Y., Li, Thomas, Huo, Yuankai, Koutsoukos, Xenofon, and Landman, Bennett A.
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), FOS: Electrical engineering, electronic engineering, information engineering, Computer Science - Computer Vision and Pattern Recognition, Electrical Engineering and Systems Science - Image and Video Processing, Machine Learning (cs.LG)
- Abstract
Inspired by vision transformers, the concept of depth-wise convolution has been revisited to provide a large Effective Receptive Field (ERF) using Large Kernel (LK) sizes for medical image segmentation. However, segmentation performance can saturate and even degrade as kernel sizes scale up (e.g., 21×21×21) in a Convolutional Neural Network (CNN). We hypothesize that convolution with LK sizes is limited in maintaining optimal convergence for locality learning. While Structural Re-parameterization (SR) enhances local convergence with small kernels in parallel, optimal small kernel branches may hinder computational efficiency during training. In this work, we propose RepUX-Net, a pure CNN architecture with a simple large kernel block design, which competes favorably with current state-of-the-art (SOTA) networks (e.g., 3D UX-Net, SwinUNETR) on 6 challenging public datasets. We derive an equivalency between kernel re-parameterization and branch-wise variation in kernel convergence. Inspired by the spatial frequency in the human visual system, we extend kernel convergence to an element-wise setting and model the spatial frequency as a Bayesian prior to re-parameterize convolutional weights during training. Specifically, a reciprocal function is leveraged to estimate a frequency-weighted value, which rescales the corresponding kernel element for stochastic gradient descent. From the experimental results, RepUX-Net consistently outperforms 3D SOTA benchmarks in internal validation (FLARE: 0.929 to 0.944), external validation (MSD: 0.901 to 0.932, KiTS: 0.815 to 0.847, LiTS: 0.933 to 0.949, TCIA: 0.736 to 0.779) and transfer learning (AMOS: 0.880 to 0.911) scenarios in Dice score., Accepted to MICCAI 2023 (top 13.6%), both code and pretrained models are available at: https://github.com/MASILab/RepUX-Net [An illustrative sketch of the frequency prior follows this record.]
- Published
- 2023
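The reciprocal frequency weighting described in the abstract above can be pictured with a small sketch: each element of a large 3D kernel is rescaled by a reciprocal function of its distance from the kernel center, acting as a prior that favors low spatial frequencies. The exact functional form below is an assumption, not the paper's formula.

    import torch

    def reciprocal_frequency_prior(kernel_size: int = 21) -> torch.Tensor:
        """Element-wise prior for a [k, k, k] kernel, peaked at the center."""
        r = torch.arange(kernel_size) - kernel_size // 2
        zz, yy, xx = torch.meshgrid(r, r, r, indexing="ij")
        dist = torch.sqrt((zz**2 + yy**2 + xx**2).float())
        return 1.0 / (1.0 + dist)   # reciprocal decay away from the kernel center

    prior = reciprocal_frequency_prior()
    # During training, conv weights would be rescaled element-wise, e.g.:
    # conv.weight.data *= prior    # broadcasts over [out_ch, in_ch, k, k, k]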
20. Influence of Cell-type Ratio on Spatially Resolved Single-cell Transcriptomes using the Tangram Algorithm: Based on Implementation on Single-Cell and MxIF Data
- Author
Cui, Can, Bao, Shunxing, Li, Jia, Deng, Ruining, Remedios, Lucas W., Asad, Zuhayr, Chiron, Sophie, Lau, Ken S., Wang, Yaohong, Coburn, Lori A., Wilson, Keith T., Roland, Joseph T., Landman, Bennett A., Liu, Qi, and Huo, Yuankai
- Subjects
Article
- Abstract
The Tangram algorithm is a benchmark method for aligning single-cell (sc/snRNA-seq) data to various forms of spatial data collected from the same region. With this data alignment, the annotation of the single-cell data can be projected to the spatial data. However, the cell composition (cell-type ratio) of the single-cell data and the spatial data might differ because of heterogeneous cell distribution. Whether the Tangram algorithm can be adapted when the two datasets have different cell-type ratios has not been discussed in previous works. In our practical application, which maps the cell-type classification results of single-cell data to multiplex immunofluorescence (MxIF) spatial data, the cell-type ratios were different even though the datasets were sampled from adjacent areas. In this work, both simulation and empirical validation were conducted to quantitatively explore the impact of a mismatched cell-type ratio on Tangram mapping in different situations. Results show that the cell-type ratio difference has a negative influence on classification accuracy.
- Published
- 2023
21. Single Slice Thigh CT Muscle Group Segmentation with Domain Adaptation and Self-Training
- Author
Yang, Qi, Yu, Xin, Lee, Ho Hin, Cai, Leon Y., Xu, Kaiwen, Bao, Shunxing, Huo, Yuankai, Moore, Ann Zenobia, Makrogiannis, Sokratis, Ferrucci, Luigi, and Landman, Bennett A.
- Subjects
FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), FOS: Electrical engineering, electronic engineering, information engineering, Computer Science - Computer Vision and Pattern Recognition, Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
Objective: Thigh muscle group segmentation is important for assessment of muscle anatomy, metabolic disease, and aging. Many efforts have been put into quantifying muscle tissues with magnetic resonance (MR) imaging, including manual annotation of individual muscles. However, leveraging publicly available annotations in MR images to achieve muscle group segmentation on single-slice computed tomography (CT) thigh images is challenging. Method: We propose an unsupervised domain adaptation pipeline with self-training to transfer labels from 3D MR to single CT slices. First, we transform the image appearance from MR to CT with CycleGAN and feed the synthesized CT images to a segmenter simultaneously. Single CT slices are divided into hard and easy cohorts based on the entropy of pseudo-labels inferred by the segmenter. After refining the easy-cohort pseudo-labels based on anatomical assumptions, self-training with the easy and hard splits is applied to fine-tune the segmenter. Results: On 152 withheld single CT thigh images, the proposed pipeline achieved a mean Dice of 0.888 (0.041) across all muscle groups, including the sartorius, hamstrings, quadriceps femoris, and gracilis muscles. Conclusion: To the best of our knowledge, this is the first pipeline to achieve thigh imaging domain adaptation from MR to CT. The proposed pipeline is effective and robust in extracting muscle groups on 2D single-slice CT thigh images. The container is available for public use at https://github.com/MASILab/DA_CT_muscle_seg [A sketch of the entropy-based split follows this record.]
- Published
- 2022
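The entropy-based cohort split used for self-training in the record above can be sketched as follows; pseudo-label maps with low mean entropy are treated as the "easy" cohort. The threshold value and tensor shapes are illustrative assumptions.

    import torch

    def split_by_entropy(prob_maps: torch.Tensor, threshold: float = 0.2):
        """prob_maps: [n_slices, n_classes, H, W] softmax outputs of the segmenter."""
        ent = -(prob_maps * prob_maps.clamp_min(1e-8).log()).sum(dim=1)  # [n, H, W]
        per_slice = ent.mean(dim=(1, 2))     # mean voxel-wise entropy per CT slice
        easy = per_slice <= threshold
        return easy, ~easy                   # boolean masks for easy/hard cohorts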
22. Adaptive Contrastive Learning with Dynamic Correlation for Multi-Phase Organ Segmentation
- Author
Lee, Ho Hin, Tang, Yucheng, Liu, Han, Fan, Yubo, Cai, Leon Y., Yang, Qi, Yu, Xin, Bao, Shunxing, Huo, Yuankai, and Landman, Bennett A.
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition, Machine Learning (cs.LG)
- Abstract
Recent studies have demonstrated the superior performance of introducing "scan-wise" contrast labels into contrastive learning for multi-organ segmentation on multi-phase computed tomography (CT). However, such scan-wise labels are limited: (1) they are a coarse classification, which cannot capture the fine-grained "organ-wise" contrast variations across all organs; (2) the label (i.e., contrast phase) is typically provided manually, which is error-prone and may introduce manual biases in defining phases. In this paper, we propose a novel data-driven contrastive loss function that adapts the similar/dissimilar contrast relationship between samples in each minibatch at the organ level. Specifically, as variable levels of contrast exist between organs, we hypothesize that organ-level contrast differences can bring additional context for defining representations in the latent space. An organ-wise contrast correlation matrix is computed with mean organ intensities under one-hot attention maps. The goal of adapting the organ-driven correlation matrix is to model variable levels of feature separability at different phases. We evaluate our proposed approach on multi-organ segmentation with both non-contrast CT (NCCT) datasets and the MICCAI 2015 BTCV Challenge contrast-enhanced CT (CECT) datasets. Compared to the state-of-the-art approaches, our proposed contrastive loss yields a substantial and significant improvement of 1.41% (from 0.923 to 0.936)., 11 pages [A sketch of the organ-wise correlation matrix follows this record.]
- Published
- 2022
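As a rough illustration of the organ-wise correlation matrix described in the record above, one can average CT intensities under one-hot organ maps and convert pairwise intensity gaps into similarities. The similarity kernel below is an assumption for the sketch, not the paper's definition.

    import torch

    def organ_contrast_matrix(ct: torch.Tensor, onehot: torch.Tensor) -> torch.Tensor:
        """ct: [B, 1, D, H, W] volumes; onehot: [B, K, D, H, W] one-hot organ maps."""
        vox = onehot.sum(dim=(2, 3, 4)).clamp_min(1)          # voxels per organ, [B, K]
        mean_int = (ct * onehot).sum(dim=(2, 3, 4)) / vox     # mean intensity per organ
        diff = mean_int.unsqueeze(2) - mean_int.unsqueeze(1)  # pairwise organ gaps
        return torch.exp(-diff.abs())                         # [B, K, K] similarity matrix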
23. Longitudinal Variability Analysis on Low-dose Abdominal CT with Deep Learning-based Segmentation
- Author
Yu, Xin, Tang, Yucheng, Yang, Qi, Lee, Ho Hin, Gao, Riqiang, Bao, Shunxing, Moore, Ann Zenobia, Ferrucci, Luigi, and Landman, Bennett A.
- Subjects
FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition
- Abstract
Metabolic health is increasingly implicated as a risk factor across conditions from cardiology to neurology, and efficient assessment of body composition is critical to quantitatively characterizing these relationships. 2D low-dose single-slice computed tomography (CT) provides a high-resolution, quantitative tissue map, albeit with a limited field of view. Although numerous potential analyses have been proposed for quantifying image context, there has been no comprehensive study of low-dose single-slice CT longitudinal variability with automated segmentation. We studied a total of 1816 slices from 1469 subjects of the Baltimore Longitudinal Study of Aging (BLSA) abdominal dataset using supervised deep learning-based segmentation and an unsupervised clustering method. 300 of the 1469 subjects with a two-year gap between their first two scans were picked out to evaluate longitudinal variability, with measurements including the intraclass correlation coefficient (ICC) and coefficient of variation (CV) in terms of tissue/organ size and mean intensity. We showed that our segmentation methods are stable in longitudinal settings, with Dice ranging from 0.821 to 0.962 for thirteen target abdominal tissue structures. We observed high variability in most organs, with ICC below 0.8. We found that the variability in organs is highly related to the cross-sectional position of the 2D slice. Our efforts pave the way for quantitative exploration and quality control to reduce uncertainties in longitudinal analysis., 7 pages, 3 figures [A sketch of the ICC and CV computations follows this record.]
- Published
- 2022
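The two variability measures named in the record above are standard; below is a minimal NumPy version for the two-scans-per-subject case. A one-way random-effects ICC(1,1) is shown, which may differ from the exact ICC variant used in the paper.

    import numpy as np

    def icc_oneway(x: np.ndarray) -> float:
        """x: [n_subjects, k_scans] organ measurements (e.g., size or mean intensity)."""
        n, k = x.shape
        msb = k * ((x.mean(axis=1) - x.mean()) ** 2).sum() / (n - 1)            # between-subject
        msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within-subject
        return (msb - msw) / (msb + (k - 1) * msw)

    def mean_cv(x: np.ndarray) -> float:
        """Average coefficient of variation across subjects' repeat scans."""
        return float((x.std(axis=1, ddof=1) / np.abs(x.mean(axis=1))).mean())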
24. UNesT: Local Spatial Representation Learning with Hierarchical Transformer for Efficient Medical Segmentation
- Author
Yu, Xin, Yang, Qi, Zhou, Yinchi, Cai, Leon Y., Gao, Riqiang, Lee, Ho Hin, Li, Thomas, Bao, Shunxing, Xu, Zhoubing, Lasko, Thomas A., Abramson, Richard G., Zhang, Zizhao, Huo, Yuankai, Landman, Bennett A., and Tang, Yucheng
- Subjects
FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), FOS: Electrical engineering, electronic engineering, information engineering, Computer Science - Computer Vision and Pattern Recognition, Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
Transformer-based models, capable of learning better global dependencies, have recently demonstrated exceptional representation learning capabilities in computer vision and medical image analysis. Transformers reformat the image into separate patches and realize global communication via the self-attention mechanism. However, positional information between patches is hard to preserve in such 1D sequences, and its loss can lead to sub-optimal performance when dealing with large amounts of heterogeneous tissues of various sizes in 3D medical image segmentation. Additionally, current methods are not robust and efficient for heavy-duty medical segmentation tasks such as predicting a large number of tissue classes or modeling globally inter-connected tissue structures. Inspired by the nested hierarchical structures in vision transformers, we propose a novel 3D medical image segmentation method (UNesT), employing a simplified and faster-converging transformer encoder design that achieves local communication among spatially adjacent patch sequences by aggregating them hierarchically. We extensively validate our method on multiple challenging datasets, consisting of 133 anatomical structures in the brain, 14 organs in the abdomen, 4 hierarchical components in the kidney, and inter-connected kidney tumors. We show that UNesT consistently achieves state-of-the-art performance and evaluate its generalizability and data efficiency. In particular, the model achieves the whole-brain segmentation task (complete ROI with 133 tissue classes) in a single network, outperforming the prior state-of-the-art method SLANT27, an ensemble of 27 network tiles; it increases the mean DSC score on the publicly available Colin and CANDI datasets from 0.7264 to 0.7444 and from 0.6968 to 0.7025, respectively., 19 pages, 17 figures. arXiv admin note: text overlap with arXiv:2203.02430
- Published
- 2022
25. Pseudo-Label Guided Multi-Contrast Generalization for Non-Contrast Organ-Aware Segmentation
- Author
Lee, Ho Hin, Tang, Yucheng, Gao, Riqiang, Yang, Qi, Yu, Xin, Bao, Shunxing, Terry, James G., Carr, J. Jeffrey, Huo, Yuankai, and Landman, Bennett A.
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), FOS: Electrical engineering, electronic engineering, information engineering, Computer Science - Computer Vision and Pattern Recognition, Electrical Engineering and Systems Science - Image and Video Processing, Machine Learning (cs.LG)
- Abstract
Non-contrast computed tomography (NCCT) is commonly acquired for lung cancer screening, assessment of general abdominal pain or suspected renal stones, trauma evaluation, and many other indications. However, the absence of contrast limits the ability to distinguish boundaries between organs. In this paper, we propose a novel unsupervised approach that leverages pairwise contrast-enhanced CT (CECT) context to compute non-contrast segmentation without ground-truth labels. Unlike generative adversarial approaches, we compute the pairwise morphological context with CECT to provide teacher guidance instead of generating fake anatomical context. Additionally, we further augment the intensity correlations in 'organ-specific' settings and increase the sensitivity to organ-aware boundaries. We validate our approach on multi-organ segmentation with paired non-contrast and contrast-enhanced CT scans using five-fold cross-validation. Full external validation is performed on an independent non-contrast cohort for aorta segmentation. Compared with the current fully supervised state-of-the-art in abdominal organ segmentation, our proposed pipeline achieves a significantly higher Dice score, by 3.98% (internal multi-organ annotated) and 8.00% (external aorta annotated). The code and pretrained models are publicly available at https://github.com/MASILab/ContrastMix.
- Published
- 2022
26. Compound Figure Separation of Biomedical Images with Side Loss
- Author
Yao, Tianyuan, Qu, Chang, Liu, Quan, Deng, Ruining, Tian, Yuanhan, Xu, Jiachen, Jha, Aadarsh, Bao, Shunxing, Zhao, Mengyang, Fogo, Agnes B., Landman, Bennett A., Chang, Catie, Yang, Haichun, and Huo, Yuankai
- Subjects
FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition
- Abstract
Unsupervised learning algorithms (e.g., self-supervised learning, auto-encoders, contrastive learning) allow deep learning models to learn effective image representations from large-scale unlabeled data. In medical image analysis, even unannotated data can be difficult to obtain for individual labs. Fortunately, national-level efforts have been made to provide efficient access to biomedical image data from previous scientific publications. For instance, the NIH has launched the Open-i search engine, which provides a large-scale image database with free access. However, the images in scientific publications consist of a considerable amount of compound figures with subplots. To extract and curate individual subplots, many different compound figure separation approaches have been developed, especially with the recent advances in deep learning. However, previous approaches typically required resource-intensive bounding box annotations to train detection models. In this paper, we propose a simple compound figure separation (SimCFS) framework that uses weak classification annotations from individual images. Our technical contribution is three-fold: (1) we introduce a new side loss that is designed for compound figure separation; (2) we introduce an intra-class image augmentation method to simulate hard cases; (3) the proposed framework enables efficient deployment to new classes of images, without requiring resource-intensive bounding box annotations. From the results, SimCFS achieved a new state-of-the-art performance on the ImageCLEF 2016 Compound Figure Separation Database. The source code of SimCFS is made publicly available at https://github.com/hrlblab/ImageSeperation.
- Published
- 2021
27. CaCL: Class-aware Codebook Learning for Weakly Supervised Segmentation on Diffuse Image Patterns
- Author
Deng, Ruining, Liu, Quan, Bao, Shunxing, Jha, Aadarsh, Chang, Catie, Millis, Bryan A., Tyska, Matthew J., and Huo, Yuankai
- Subjects
FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION, Computer Science - Computer Vision and Pattern Recognition
- Abstract
Weakly supervised learning has been rapidly advanced in biomedical image analysis to achieve pixel-wise labels (segmentation) from image-wise annotations (classification), as biomedical images naturally contain image-wise labels in many scenarios. The current weakly supervised learning algorithms from the computer vision community are largely designed for focal objects (e.g., dogs and cats). However, such algorithms are not optimized for diffuse patterns in biomedical imaging (e.g., stains and fluorescence in microscopy imaging). In this paper, we propose a novel class-aware codebook learning (CaCL) algorithm to perform weakly supervised learning for diffuse image patterns. Specifically, the CaCL algorithm is deployed to segment protein-expressed brush border regions from histological images of human duodenum. Our contribution is three-fold: (1) we approach weakly supervised segmentation from a novel codebook learning perspective; (2) the CaCL algorithm segments diffuse image patterns rather than focal objects; and (3) the proposed algorithm is implemented in a multi-task framework based on the Vector Quantised-Variational AutoEncoder (VQ-VAE) via joint image reconstruction, classification, feature embedding, and segmentation. The experimental results show that our method achieved superior performance compared with baseline weakly supervised algorithms. The code is available at https://github.com/ddrrnn123/CaCL.
- Published
- 2020
28. Learning white matter subject‐specific segmentation from structural MRI.
- Author
Yang, Qi, Hansen, Colin B., Cai, Leon Y., Rheault, Francois, Lee, Ho Hin, Bao, Shunxing, Chandio, Bramsh Qamar, Williams, Owen, Resnick, Susan M., Garyfallidis, Eleftherios, Anderson, Adam W., Descoteaux, Maxime, Schilling, Kurt G., and Landman, Bennett A.
- Subjects
CONVOLUTIONAL neural networks, MAGNETIC resonance imaging, BRAIN anatomy, DEEP learning, BRAIN mapping
- Abstract
Purpose: Mapping brain white matter (WM) is essential for building an understanding of brain anatomy and function. Tractography-based methods derived from diffusion-weighted MRI (dMRI) are the principal tools for investigating WM. These procedures rely on time-consuming dMRI acquisitions that may not always be available, especially for legacy or time-constrained studies. To address this problem, we aim to generate WM tracts from structural magnetic resonance imaging (MRI) images by deep learning. Methods: Following recently proposed innovations in structural anatomical segmentation, we evaluate the feasibility of training multiple spatially localized convolutional neural networks to learn context from fixed spatial patches of structural MRI on a standard template. We focus on six widely used dMRI tractography algorithms (TractSeg, RecoBundles, XTRACT, Tracula, automated fiber quantification (AFQ), and AFQclipped) and train 125 U-Net models to learn these techniques from 3870 T1-weighted images from the Baltimore Longitudinal Study of Aging, the Human Connectome Project S1200 release, and scans acquired at Vanderbilt University. Results: The proposed framework identifies fiber bundles with high agreement against tractography-based pathways, with a median Dice coefficient from 0.62 to 0.87 on a test cohort, achieving improved subject-specific accuracy when compared to population atlas-based methods. We demonstrate the generalizability of the proposed framework on three externally available datasets. Conclusions: We show that patch-wise convolutional neural networks can achieve robust bundle segmentation from T1w. We envision the use of this framework for visualizing the expected course of WM pathways when dMRI is not available.
- Published
- 2022
- Full Text
- View/download PDF
29. Distanced LSTM: Time-Distanced Gates in Long Short-Term Memory Models for Lung Cancer Detection
- Author
Gao, Riqiang, Huo, Yuankai, Bao, Shunxing, Tang, Yucheng, Antic, Sanja L., Epstein, Emily S., Balar, Aneri B., Deppen, Steve, Paulson, Alexis B., Sandler, Kim L., Massion, Pierre P., and Landman, Bennett A.
- Subjects
FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), Image and Video Processing (eess.IV), FOS: Electrical engineering, electronic engineering, information engineering, Computer Science - Computer Vision and Pattern Recognition, Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
The field of lung nodule detection and cancer prediction has been rapidly developing with the support of large public data archives. Previous studies have largely focused on cross-sectional (single) CT data. Herein, we consider longitudinal data. The Long Short-Term Memory (LSTM) model addresses learning with regularly spaced time points (i.e., equal temporal intervals). However, clinical imaging follows patient needs with often heterogeneous, irregular acquisitions. To model both regular and irregular longitudinal samples, we generalize the LSTM model with the Distanced LSTM (DLSTM) for temporally varied acquisitions. The DLSTM includes a Temporal Emphasis Model (TEM) that enables learning across regularly and irregularly sampled intervals. Briefly, (1) the time intervals between longitudinal scans are modeled explicitly, (2) temporally adjustable forget and input gates are introduced for irregular temporal sampling, and (3) the latest longitudinal scan has an additional emphasis term. We evaluate the DLSTM framework on three datasets: simulated data, 1794 National Lung Screening Trial (NLST) scans, and 1420 clinically acquired scans with heterogeneous and irregular temporal accession. The experiments on the first two datasets demonstrate that our method achieves competitive performance on both simulated and regularly sampled datasets (e.g., improving the LSTM F1 score on NLST from 0.6785 to 0.7085). In external validation on clinically and irregularly acquired data, the benchmarks achieved 0.8350 (CNN feature) and 0.8380 (LSTM) on the area under the ROC curve (AUC), while the proposed DLSTM achieves 0.8905., This paper is accepted by MLMI (oral), MICCAI workshop [A sketch of the time-distanced gate follows this record.]
- Published
- 2019
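One way to picture the time-distanced gating described above is a standard LSTM cell whose memory is attenuated by a monotone function of the elapsed time between scans, so that temporally distant acquisitions are forgotten faster. The exponential decay below is an illustrative stand-in, not the paper's exact Temporal Emphasis Model.

    import torch
    import torch.nn as nn

    class TimeDistancedLSTMCell(nn.Module):
        """Wraps nn.LSTMCell; decays cell memory by the time gap between scans."""
        def __init__(self, in_dim: int, hid_dim: int):
            super().__init__()
            self.cell = nn.LSTMCell(in_dim, hid_dim)

        def forward(self, x, state, dt):
            """x: [B, in_dim]; state: (h, c); dt: [B, 1] gap to previous scan (years)."""
            h, c = self.cell(x, state)
            decay = torch.exp(-dt)       # distant scans: stronger forgetting
            return h, c * decay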
30. Cortical Surface Parcellation using Spherical Convolutional Neural Networks
- Author
Parvathaneni, Prasanna, Bao, Shunxing, Nath, Vishwesh, Woodward, Neil D., Claassen, Daniel O., Cascio, Carissa J., Zald, David H., Huo, Yuankai, Landman, Bennett A., and Lyu, Ilwoo
- Subjects
FOS: Biological sciences, Quantitative Biology - Neurons and Cognition, Image and Video Processing (eess.IV), FOS: Electrical engineering, electronic engineering, information engineering, Neurons and Cognition (q-bio.NC), Electrical Engineering and Systems Science - Image and Video Processing, Quantitative Biology - Quantitative Methods, Quantitative Methods (q-bio.QM)
- Abstract
We present cortical surface parcellation using spherical deep convolutional neural networks. Traditional multi-atlas cortical surface parcellation requires inter-subject surface registration using geometric features, with high processing time per subject (2-3 hours). Moreover, even optimal surface registration does not necessarily produce optimal cortical parcellation, as parcel boundaries are not fully matched to the geometric features. In this context, the choice of training features is important for accurate cortical parcellation. To utilize the networks efficiently, we propose cortical parcellation-specific input data derived from the irregular and complicated structure of cortical surfaces. To this end, we align ground-truth cortical parcel boundaries and use their resulting deformation fields to generate new pairs of deformed geometric features and parcellation maps. To extend the capability of the networks, we then smoothly morph cortical geometric features and parcellation maps using the intermediate deformation fields. We validate our method on 427 adult brains for 49 labels. The experimental results show that our method outperforms traditional multi-atlas and naive spherical U-Net approaches, while achieving full cortical parcellation in less than a minute.
- Published
- 2019
31. Field-of-view extension for brain diffusion MRI via deep generative models.
- Author
Gao, Chenyu, Bao, Shunxing, Kim, Michael E., Newlin, Nancy R., Kanakaraj, Praitayini, Yao, Tianyuan, Rudravaram, Gaurav, Huo, Yuankai, Moyer, Daniel, Schilling, Kurt, Kukull, Walter A., Toga, Arthur W., Archer, Derek B., Hohman, Timothy J., Landman, Bennett A., and Li, Zhiyuan
- Published
- 2024
- Full Text
- View/download PDF
32. Automated, open-source segmentation of the hippocampus and amygdala with the open Vanderbilt archive of the temporal lobe.
- Author
Plassard, Andrew J., Bao, Shunxing, McHugo, Maureen, Beason-Held, Lori, Blackford, Jennifer U., Heckers, Stephan, and Landman, Bennett A.
- Subjects
*TEMPORAL lobe, *HIPPOCAMPUS (Brain), *AMYGDALOID body, *ALGORITHMS, *ARCHIVES
- Abstract
Examining volumetric differences of the amygdala and anterior-posterior regions of the hippocampus is important for understanding cognition and clinical disorders. However, the gold-standard manual segmentation of these structures is time- and labor-intensive. Automated, accurate, and reproducible techniques to segment the hippocampus and amygdala are desirable. Here, we present a hierarchical approach to multi-atlas segmentation of the hippocampus head, body and tail and the amygdala based on atlases from 195 individuals. The Open Vanderbilt Archive of the temporal Lobe (OVAL) segmentation technique outperforms the commonly used FreeSurfer, FSL FIRST, and whole-brain multi-atlas segmentation approaches for the full hippocampus and amygdala, and nears or exceeds inter-rater reproducibility for segmentation of the hippocampus head, body and tail. OVAL has been released as open source and is freely available.
• Presents labeling protocols for the hippocampus head, body and amygdala.
• Creates an atlas population of 195 subjects with manually traced hippocampi and automatically segmented amygdalae.
• Presents the OVAL algorithm, a hierarchical approach for full hippocampus and amygdala segmentation.
- Published
- 2021
- Full Text
- View/download PDF
33. Body Part Regression With Self-Supervision.
- Author
Tang, Yucheng, Gao, Riqiang, Han, Shizhong, Chen, Yunqiang, Gao, Dashan, Nath, Vishwesh, Bermudez, Camilo, Savona, Michael R., Bao, Shunxing, Lyu, Ilwoo, Huo, Yuankai, and Landman, Bennett A.
- Subjects
WHOLE body imaging, STATISTICAL learning, COMPUTED tomography, MESSAGE passing (Computer science), SUPERVISED learning
- Abstract
Body part regression is a promising new technique that enables content navigation through self-supervised learning. Using this technique, the global quantitative spatial location of each axial view slice is obtained from computed tomography (CT). However, it is challenging to define a unified global coordinate system for body CT scans due to the large variabilities in image resolution, contrast, sequences, and patient anatomy. Therefore, the widely used supervised learning approach cannot be easily deployed. To address these concerns, we propose an annotation-free method named blind-unsupervised-supervision network (BUSN). The contributions of the work are four-fold: (1) 1030 multi-center CT scans are used to develop BUSN without any manual annotation; (2) the proposed BUSN corrects the predictions from unsupervised learning and uses the corrected results as the new supervision; (3) to improve the consistency of predictions, we propose a novel neighbor message passing (NMP) scheme that is integrated with BUSN as a statistical-learning-based correction; and (4) we introduce a new pre-processing pipeline that includes BUSN, which is validated on 3D multi-organ segmentation. The proposed method is trained on 1030 whole-body CT scans (230,650 slices) from five datasets, as well as an independent external validation cohort with 100 scans. In body part regression, the proposed BUSN achieved a significantly higher median R-squared score (0.9089) than the state-of-the-art unsupervised method (0.7153). When introduced as a preprocessing stage in volumetric segmentation, the proposed BUSN-based pipeline increases the total mean Dice score of 3D abdominal multi-organ segmentation from 0.7991 to 0.8145.
- Published
- 2021
- Full Text
- View/download PDF
34. Phase identification for dynamic CT enhancements with generative adversarial network.
- Author
Tang, Yucheng, Gao, Riqiang, Lee, Ho Hin, Chen, Yunqiang, Gao, Dashan, Bermudez, Camilo, Bao, Shunxing, Huo, Yuankai, Savoie, Brent V., and Landman, Bennett A.
- Subjects
GENERATIVE adversarial networks, COMPUTED tomography, SPIRAL computed tomography, INJECTIONS
- Abstract
Purpose: Dynamic contrast-enhanced computed tomography (CT) is widely used to provide dynamic tissue contrast for diagnostic investigation and vascular identification. However, the phase information of contrast injection is typically recorded manually by technicians, which introduces missing or mislabeled phases. Hence, imaging-based contrast phase identification is appealing, but challenging, due to large variations among different contrast protocols, vascular dynamics, and metabolism, especially for clinically acquired CT scans. The purpose of this study is to perform imaging-based phase identification for dynamic abdominal CT using a proposed adversarial learning framework across five representative contrast phases. Methods: A generative adversarial network (GAN) is proposed as a disentangled representation learning model. To explicitly model different contrast phases, a low-dimensional common representation and a class-specific code are fused in the hidden layer. Then, the low-dimensional features are reconstructed following a discriminator and classifier. 36,350 slices of CT scans from 400 subjects are used to evaluate the proposed method with fivefold cross-validation with splits on subjects. Then, 2216 slice images from 20 independent subjects are employed as independent testing data, which are evaluated using a multiclass normalized confusion matrix. Results: The proposed network significantly improved correspondence (0.93) over VGG, ResNet50, StarGAN, and 3DSE, with accuracy scores of 0.59, 0.62, 0.72, and 0.90, respectively (P < 0.001, Stuart-Maxwell test for normalized multiclass confusion matrix). Conclusion: We show that adversarial learning of the discriminator can be beneficial for capturing contrast information among phases. The proposed discriminator from the disentangled network achieves promising results.
- Published
- 2021
- Full Text
- View/download PDF
35. Multi-modal imaging with specialized sequences improves accuracy of the automated subcortical grey matter segmentation.
- Author
Plassard, Andrew J., Bao, Shunxing, D'Haese, Pierre F., Pallavaram, Srivatsan, Claassen, Daniel O., Dawant, Benoit M., and Landman, Bennett A.
- Subjects
*GRAY matter (Nerve tissue), *THALAMIC nuclei, *LIMBIC system, *GLOBUS pallidus, *SUBSTANTIA nigra, *PARKINSON'S disease, *BASAL ganglia
- Abstract
The basal ganglia and limbic system, particularly the thalamus, putamen, internal and external globus pallidus, substantia nigra, and sub-thalamic nucleus, comprise a clinically relevant signal network for Parkinson's disease. In order to manually trace these structures, a combination of high-resolution and specialized sequences at 7 T is used, but it is not feasible to routinely scan clinical patients on those scanners. Targeted imaging sequences at 3 T have been presented to enhance contrast in a select group of these structures. In this work, we show that a series of atlases generated at 7 T can be used to accurately segment these structures at 3 T using a combination of standard and optimized imaging sequences, though no one approach provided the best result across all structures. In the thalamus and putamen, a median Dice similarity coefficient (DSC) over 0.88 and a mean surface distance <1.0 mm were achieved using a combination of T1 and optimized inversion recovery imaging sequences. In the internal and external globus pallidus, a DSC over 0.75 and a mean surface distance <1.2 mm were achieved using a combination of T1 and inversion recovery imaging sequences. In the substantia nigra and sub-thalamic nucleus, a DSC of over 0.6 and a mean surface distance of <1.0 mm were achieved using the inversion recovery imaging sequence. On average, using T1 and optimized inversion recovery together significantly improved segmentation results over either individual modality (p < 0.05, Wilcoxon signed-rank test).
- Published
- 2019
- Full Text
- View/download PDF
36. Registration-based image enhancement improves multi-atlas segmentation of the thalamic nuclei and hippocampal subfields.
- Author
Bao, Shunxing, Bermudez, Camilo, Huo, Yuankai, Parvathaneni, Prasanna, Rodriguez, William, Resnick, Susan M., D'Haese, Pierre-François, McHugo, Maureen, Heckers, Stephan, Dawant, Benoit M., Lyu, Ilwoo, and Landman, Bennett A.
- Subjects
*THALAMIC nuclei, *IMAGE intensifiers, *IMAGE segmentation, *MAGNETIC resonance imaging, *BIG data
- Abstract
Magnetic resonance imaging (MRI) is an important tool for analysis of deep brain grey matter structures. However, analysis of these structures is limited due to the low intensity contrast typically found in whole-brain imaging protocols. Herein, we propose a big data registration-enhancement (BDRE) technique to augment the contrast of deep brain structures using an efficient large-scale non-rigid registration strategy. Direct validation is problematic given a lack of ground truth data. Rather, we validate the usefulness and impact of BDRE for multi-atlas (MA) segmentation on two sets of structures of clinical interest: the thalamic nuclei and hippocampal subfields. The experimental design compares algorithms using T1-weighted 3 T MRI for both structures (and additional 7 T MRI for the thalamic nuclei) with an algorithm using BDRE. As baseline comparisons, a recent denoising (DN) technique and a super-resolution (SR) method are used to preprocess the original 3 T MRI. The performance of each MA segmentation is evaluated by the Dice similarity coefficient (DSC). BDRE significantly improves mean segmentation accuracy over all methods tested for both thalamic nuclei (3 T imaging: 9.1%; 7 T imaging: 15.6%; DN: 6.9%; SR: 16.2%) and hippocampal subfields (3 T T1 only: 8.7%; DN: 8.4%; SR: 8.6%). We also present DSC performance for each thalamic nucleus and hippocampal subfield and show that BDRE can help MA segmentation for individual thalamic nuclei and hippocampal subfields. This work will enable large-scale analysis of clinically relevant deep brain structures from commonly acquired T1 images.
Highlights:
• Discovers a new image enhancement method that improves contrast within deep brain structures.
• The approach is a data-driven pipeline using a large volume of fast non-rigid registrations.
• Explores the usefulness of the proposed new contrast imaging modality via multi-atlas segmentation.
• Targets two deep brain structures: the thalamic nuclei and hippocampal subfields.
- Published
- 2019
- Full Text
- View/download PDF
37. Splenomegaly Segmentation on Multi-Modal MRI Using Deep Convolutional Networks.
- Author
Huo, Yuankai, Xu, Zhoubing, Bao, Shunxing, Bermudez, Camilo, Moon, Hyeonsoo, Parvathaneni, Prasanna, Moyo, Tamara K., Savona, Michael R., Assad, Albert, Abramson, Richard G., and Landman, Bennett A.
- Subjects
IMAGE segmentation, MAGNETIC resonance imaging, SPLEEN diseases, DIFFUSION magnetic resonance imaging, ANATOMICAL variation, SPATIAL variation, ARTIFICIAL neural networks
- Abstract
The finding of splenomegaly, abnormal enlargement of the spleen, is a non-invasive clinical biomarker for liver and spleen diseases. Automated segmentation methods are essential to efficiently quantify splenomegaly from clinically acquired abdominal magnetic resonance imaging (MRI) scans. However, the task is challenging due to: 1) large anatomical and spatial variations of splenomegaly; 2) large inter- and intra-scan intensity variations on multi-modal MRI; and 3) limited numbers of labeled splenomegaly scans. In this paper, we propose the Splenomegaly Segmentation Network (SS-Net) to introduce deep convolutional neural network (DCNN) approaches to multi-modal MRI splenomegaly segmentation. Large convolutional kernel layers were used to address the spatial and anatomical variations, while conditional generative adversarial networks were employed to improve the segmentation performance of SS-Net in an end-to-end manner. A clinically acquired cohort containing both T1-weighted (T1w) and T2-weighted (T2w) MRI splenomegaly scans was used to train and evaluate the performance of multi-atlas segmentation (MAS), 2D DCNN networks, and a 3D DCNN network. From the experimental results, the DCNN methods achieved superior performance to the state-of-the-art MAS method. The proposed SS-Net achieved the highest median and mean Dice scores among the investigated baseline DCNN methods.
- Published
- 2019
- Full Text
- View/download PDF
38. SynSeg-Net: Synthetic Segmentation Without Target Modality Ground Truth.
- Author
Huo, Yuankai, Xu, Zhoubing, Moon, Hyeonsoo, Bao, Shunxing, Assad, Albert, Moyo, Tamara K., Savona, Michael R., Abramson, Richard G., and Landman, Bennett A.
- Subjects
IMAGE segmentation, MODAL logic, SOURCE code, TRUTH
- Abstract
A key limitation of deep convolutional neural network (DCNN)-based image segmentation methods is the lack of generalizability. Manually traced training images are typically required when segmenting organs in a new imaging modality or from a distinct disease cohort. The manual efforts can be alleviated if the manually traced images in one imaging modality (e.g., MRI) are able to train a segmentation network for another imaging modality (e.g., CT). In this paper, we propose an end-to-end synthetic segmentation network (SynSeg-Net) that trains a segmentation network for a target imaging modality without having manual labels. SynSeg-Net is trained using: 1) unpaired intensity images from source and target modalities, and 2) manual labels only from the source modality. SynSeg-Net is enabled by recent advances in cycle generative adversarial networks and DCNNs. We evaluate the performance of SynSeg-Net on two experiments: 1) MRI-to-CT splenomegaly synthetic segmentation for abdominal images, and 2) CT-to-MRI total intracranial volume synthetic segmentation for brain images. The proposed end-to-end approach achieved superior performance to two-stage methods. Moreover, SynSeg-Net achieved performance comparable to the traditional segmentation network using target modality labels in certain scenarios. The source code of SynSeg-Net is publicly available at https://github.com/MASILab/SynSeg-Net.
- Published
- 2019
- Full Text
- View/download PDF
39. Deep conditional generative model for longitudinal single-slice abdominal computed tomography harmonization.
- Author
Yu, Xin, Yang, Qi, Tang, Yucheng, Gao, Riqiang, Bao, Shunxing, Cai, Leon Y., Lee, Ho Hin, Huo, Yuankai, Moore, Ann Zenobia, Ferrucci, Luigi, and Landman, Bennett A.
- Published
- 2024
- Full Text
- View/download PDF
40. Single slice thigh CT muscle group segmentation with domain adaptation and self-training.
- Author
Yang, Qi, Yu, Xin, Lee, Ho Hin, Cai, Leon Y., Xu, Kaiwen, Bao, Shunxing, Huo, Yuankai, Moore, Ann Zenobia, Makrogiannis, Sokratis, Ferrucci, Luigi, and Landman, Bennett A.
- Published
- 2023
- Full Text
- View/download PDF
41. Predicting Crohn's disease severity in the colon using mixed cell nucleus density from pseudo labels.
- Author
Remedios, Lucas W., Bao, Shunxing, Kerley, Cailey I., Cai, Leon Y., Rheault, François, Deng, Ruining, Cui, Can, Chiron, Sophie, Lau, Ken S., Roland, Joseph T., Washington, Mary K., Coburn, Lori A., Wilson, Keith T., Huo, Yuankai, and Landman, Bennett A.
- Published
- 2023
- Full Text
- View/download PDF
42. Topological-preserving membrane skeleton segmentation in multiplex immunofluorescence imaging.
- Author
Bao, Shunxing, Cui, Can, Li, Jia, Tang, Yucheng, Lee, Ho Hin, Deng, Ruining, Remedios, Lucas W., Yu, Xin, Yang, Qi, Chiron, Sophie, Patterson, Nathan Heath, Lau, Ken S., Liu, Qi, Roland, Joseph T., Coburn, Lori A., Wilson, Keith T., Landman, Bennett A., and Huo, Yuankai
- Published
- 2023
- Full Text
- View/download PDF
43. Deep whole brain segmentation of 7T structural MRI.
- Author
Ramadass, Karthik, Yu, Xin, Cai, Leon Y., Tang, Yucheng, Bao, Shunxing, Kerley, Cailey I., D'Archangel, Micah A., Barquero, Laura A., Newton, Allen T., Gauthier, Isabel, McGugin, Rankin Williams, Dawant, Benoit M., Cutting, Laurie E., Huo, Yuankai, and Landman, Bennett A.
- Published
- 2023
- Full Text
- View/download PDF
44. Longitudinal variability analysis on low-dose abdominal CT with deep learning-based segmentation.
- Author
Yu, Xin, Tang, Yucheng, Yang, Qi, Lee, Ho Hin, Gao, Riqiang, Bao, Shunxing, Moore, Ann Zenobia, Ferrucci, Luigi, and Landman, Bennett A.
- Published
- 2023
- Full Text
- View/download PDF
45. Unsupervised registration refinement for generating unbiased eye atlas.
- Author
Lee, Ho Hin, Tang, Yucheng, Bao, Shunxing, Yang, Qi, Yu, Xin, Schey, Kevin L., Spraggins, Jeffery M., Huo, Yuankai, and Landman, Bennett A.
- Published
- 2023
- Full Text
- View/download PDF
46. Inpainting missing tissue in multiplexed immunofluorescence imaging.
- Author
Bao, Shunxing, Tang, Yucheng, Lee, Ho Hin, Gao, Riqiang, Yang, Qi, Yu, Xin, Chiron, Sophie, Coburn, Lori A., Wilson, Keith T., Roland, Joseph T., Landman, Bennett A., and Huo, Yuankai
- Published
- 2022
- Full Text
- View/download PDF
47. Label efficient segmentation of single slice thigh CT with two-stage pseudo labels.
- Author
Yang, Qi, Yu, Xin, Lee, Ho Hin, Tang, Yucheng, Bao, Shunxing, Gravenstein, Kristofer S., Moore, Ann Zenobia, Makrogiannis, Sokratis, Ferrucci, Luigi, and Landman, Bennett A.
- Published
- 2022
- Full Text
- View/download PDF
48. Multi-modal learning with missing data for cancer diagnosis using histopathological and genomic data.
- Author
Cui, Can, Asad, Zuhayr, Dean, William F., Smith, Isabelle T., Madden, Christopher, Bao, Shunxing, Landman, Bennett A., Roland, Joseph T., Coburn, Lori A., Wilson, Keith T., Zwerner, Jeffrey P., Zhao, Shilin, Wheless, Lee E., and Huo, Yuankai
- Published
- 2021
- Full Text
- View/download PDF
49. Accelerating 2D abdominal organ segmentation with active learning.
- Author
Yu, Xin, Tang, Yucheng, Yang, Qi, Lee, Ho Hin, Bao, Shunxing, Moore, Ann Zenobia, Ferrucci, Luigi, and Landman, Bennett A.
- Published
- 2021
- Full Text
- View/download PDF
50. Supervised deep generation of high-resolution arterial phase computed tomography kidney substructure atlas.
- Author
Lee, Ho Hin, Tang, Yucheng, Bao, Shunxing, Xu, Yan, Yang, Qi, Yu, Xin, Fogo, Agnes B., Harris, Raymond, de Caestecker, Mark P., Spraggins, Jeffery M., Heinrich, Mattias, Huo, Yuankai, and Landman, Bennett A.
- Published
- 2021
- Full Text
- View/download PDF