Search Results (192 results)
2. Call for Papers IEEE Transactions on Medical Imaging Special Issue on Annotation-Efficient Deep Learning for Medical Imaging.
- Subjects
- *DIAGNOSTIC imaging, *DEEP learning, *MEDICAL imaging systems, *DIGITAL Object Identifiers, *TRANSACTION systems (Computer systems)
- Published
- 2020
- Full Text
- View/download PDF
4. Echocardiography Segmentation With Enforced Temporal Consistency.
- Author
- Painchaud, Nathan, Duchateau, Nicolas, Bernard, Olivier, and Jodoin, Pierre-Marc
- Subjects
- *ULTRASONIC imaging, *CONVOLUTIONAL neural networks, *ECHOCARDIOGRAPHY, *MAGNETIC resonance imaging, *CARDIAC imaging
- Abstract
Convolutional neural networks (CNN) have demonstrated their ability to segment 2D cardiac ultrasound images. However, despite recent successes in which intra-observer variability on end-diastole and end-systole images has been matched, CNNs still struggle to leverage temporal information to provide accurate and temporally consistent segmentation maps across the whole cycle. Such consistency is required to accurately describe the cardiac function, a necessary step in diagnosing many cardiovascular diseases. In this paper, we propose a framework to learn the 2D+time apical long-axis cardiac shape such that the segmented sequences can benefit from temporal and anatomical consistency constraints. Our method is a post-processing step that takes segmented echocardiographic sequences produced by any state-of-the-art method and processes them in two steps to (i) identify spatio-temporal inconsistencies according to the overall dynamics of the cardiac sequence and (ii) correct the inconsistencies. The identification and correction of cardiac inconsistencies rely on a constrained autoencoder trained to learn a physiologically interpretable embedding of cardiac shapes, in which we can both detect and fix anomalies. We tested our framework on 98 full-cycle sequences from the CAMUS dataset, which are available alongside this paper. Our temporal regularization method not only improves the accuracy of the segmentation across whole sequences, but also enforces temporal and anatomical consistency. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
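The two-step idea in the abstract above (detect spatio-temporal inconsistencies, then correct them) can be illustrated with a much-simplified sketch. This is a hypothetical stand-in, not the authors' constrained autoencoder: frames whose segmented area breaks the cycle's smooth dynamics are flagged against a local median and repaired by interpolating valid neighbours. All function names and tolerances here are our own.

```python
# Hypothetical simplification of the two-step post-processing:
# (i) flag temporally inconsistent frames, (ii) correct them.

def detect_inconsistent(areas, tol=0.25):
    """Flag frames whose segmented area deviates strongly from the local median."""
    flags = []
    for i in range(len(areas)):
        window = sorted(areas[max(0, i - 1):i + 2])
        med = window[len(window) // 2]
        flags.append(abs(areas[i] - med) > tol * med)
    return flags

def correct(areas, flags):
    """Replace flagged frames by the mean of the nearest valid neighbours."""
    fixed = list(areas)
    for i, bad in enumerate(flags):
        if not bad:
            continue
        left = next((fixed[j] for j in range(i - 1, -1, -1) if not flags[j]), None)
        right = next((fixed[j] for j in range(i + 1, len(flags)) if not flags[j]), None)
        valid = [v for v in (left, right) if v is not None]
        fixed[i] = sum(valid) / len(valid)
    return fixed

areas = [100, 95, 90, 40, 80, 75, 70]   # frame 3 is temporally inconsistent
flags = detect_inconsistent(areas)
smoothed = correct(areas, flags)
```

The real method performs both steps in the latent space of an autoencoder so that corrections also stay anatomically plausible; this sketch only mirrors the data flow.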
5. Identify Representative Samples by Conditional Random Field of Cancer Histology Images.
- Author
- Shen, Yiqing, Shen, Dinggang, and Ke, Jing
- Subjects
- *RANDOM fields, *CONVOLUTIONAL neural networks, *HISTOLOGY, *ACTIVE learning, *STATISTICAL sampling, *HISTOPATHOLOGY, *DEEP learning, *LIFTING & carrying (Human mechanics)
- Abstract
Pathology analysis is crucial to precise cancer diagnosis and the subsequent treatment plan. To detect abnormality in histopathology images with prevailing patch-based convolutional neural networks (CNNs), contextual information often serves as a powerful cue. However, as whole-slide images (WSIs) are characterized by intense morphological heterogeneity and extensive tissue scale, a straightforward visual span to a larger context may not well capture the information closely associated with the focal patch. In this paper, we propose a novel pixel-offset based patch-location method to identify highly representative tissues, with a CNN backbone. A Pathology Deformable Conditional Random Field (PDCRF) is proposed to learn the offsets and weights of neighboring contexts in a spatially adaptive manner, to search for highly representative patches. A CNN trained on the localized patches is then capable of consistently reaching superior classification outcomes for histology images. Overall, the proposed method achieves state-of-the-art performance, improving test classification accuracy over the baseline by 1.15-2.60%, 0.78-1.78%, and 1.47-2.18% on the public TCGA-STAD, TCGA-COAD, and TCGA-READ datasets, respectively. It also achieves 88.95% test accuracy and 0.920 test AUC on Camelyon16. To show the effectiveness of the proposed framework on downstream tasks, we take a further step by incorporating an active learning model, in which PDCRF noticeably reduces the number of manual annotations needed to match a standard patch-based histology classifier. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
6. A Unified Deep Learning Framework for ssTEM Image Restoration.
- Author
- Deng, Shiyu, Huang, Wei, Chen, Chang, Fu, Xueyang, and Xiong, Zhiwei
- Subjects
- *IMAGE reconstruction, *DEEP learning, *ADAPTIVE optics
- Abstract
Serial section transmission electron microscopy (ssTEM) reveals biological information at the nanometer scale and plays an important role in ultrastructural analysis. However, due to the imperfect preparation of biological samples, ssTEM images are usually degraded by various artifacts that greatly challenge subsequent analysis and visualization. In this paper, we introduce a unified deep learning framework for ssTEM image restoration which addresses three main types of artifacts, i.e., Support Film Folds (SFF), Staining Precipitates (SP), and Missing Sections (MS). To achieve this goal, we first model the appearance of SFF and SP artifacts by conducting comprehensive analyses of the statistics of real degraded images, from which we can then simulate a large number of paired images (degraded/artifact-free) for training a deep restoration network. Then, we design a coarse-to-fine restoration network consisting of three modules, i.e., interpolation, correction, and fusion. The interpolation module exploits the adjacent artifact-free images for an initial restoration, while the correction module resorts to the degraded image itself to rectify the artifacts. Finally, the fusion module jointly utilizes the above two results to further improve the restoration fidelity. Experimental results on both synthetic and real test data validate the significantly improved performance of our proposed framework over existing solutions, in terms of both image restoration fidelity and neuron segmentation accuracy. To the best of our knowledge, this is the first unified deep learning framework for ssTEM image restoration from different types of artifacts. Code is available at https://github.com/sydeng99/ssTEM-restoration. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
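The interpolation/correction/fusion pipeline in the abstract above can be mimicked in a hypothetical 1-D toy (the real modules are deep networks on 2-D sections; this only mirrors the data flow, and all names are ours): interpolate from the adjacent artifact-free sections, keep the degraded section's own valid pixels, and fuse by preferring the latter.

```python
# Toy 1-D illustration of the three-module restoration flow.

def interpolate(prev_sec, next_sec):
    """Initial restoration from the adjacent artifact-free sections."""
    return [(p + n) / 2 for p, n in zip(prev_sec, next_sec)]

def self_correct(degraded, artifact_mask):
    """Rectify using the degraded section itself: keep pixels outside the artifact."""
    return [None if m else v for v, m in zip(degraded, artifact_mask)]

def fuse(interp, corrected):
    """Prefer the section's own valid pixels, fall back to interpolation."""
    return [c if c is not None else i for i, c in zip(interp, corrected)]

prev_sec = [10, 10, 10, 10]
next_sec = [14, 14, 14, 14]
degraded = [12, 99, 99, 12]          # pixels 1-2 hit by a support-film fold
mask     = [False, True, True, False]
restored = fuse(interpolate(prev_sec, next_sec), self_correct(degraded, mask))
```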
7. Unsupervised Tissue Segmentation via Deep Constrained Gaussian Network.
- Author
- Nan, Yang, Tang, Peng, Zhang, Guyue, Zeng, Caihong, Liu, Zhihong, Gao, Zhifan, Zhang, Heye, and Yang, Guang
- Subjects
- *SUPERVISED learning, *WILCOXON signed-rank test, *DEEP learning, *MACHINE learning
- Abstract
Tissue segmentation is the mainstay of pathological examination, whereas the manual delineation is unduly burdensome. To assist this time-consuming and subjective manual step, researchers have devised methods to automatically segment structures in pathological images. Recently, automated machine and deep learning based methods dominate tissue segmentation research studies. However, most machine and deep learning based approaches are supervised and developed using a large number of training samples, in which the pixel-wise annotations are expensive and sometimes can be impossible to obtain. This paper introduces a novel unsupervised learning paradigm by integrating an end-to-end deep mixture model with a constrained indicator to acquire accurate semantic tissue segmentation. This constraint aims to centralise the components of deep mixture models during the calculation of the optimisation function. In so doing, the redundant or empty class issues, which are common in current unsupervised learning methods, can be greatly reduced. By validation on both public and in-house datasets, the proposed deep constrained Gaussian network achieves significantly (Wilcoxon signed-rank test) better performance (with the average Dice scores of 0.737 and 0.735, respectively) on tissue segmentation with improved stability and robustness, compared to other existing unsupervised segmentation approaches. Furthermore, the proposed method presents a similar performance (p-value >0.05) compared to the fully supervised U-Net. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
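The "deep mixture model" in the abstract above builds on classical Gaussian mixture clustering. As a non-deep, stand-alone reference point (plain EM on pixel intensities, not the paper's constrained deep network; function names are ours), a two-component 1-D mixture can be fit like this:

```python
import math

# Minimal EM for a two-component 1-D Gaussian mixture, used here as an
# unsupervised "tissue" clusterer on raw intensities.

def em_gmm2(x, iters=50):
    mu = [min(x), max(x)]          # deterministic initialisation at the extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each intensity
        resp = []
        for xi in x:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means, variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var[k] = max(1e-6, sum(r[k] * (xi - mu[k]) ** 2
                                   for r, xi in zip(resp, x)) / nk)
    labels = [max(range(2), key=lambda k: r[k]) for r in resp]
    return mu, labels

pixels = [0.9, 1.1, 1.0, 5.0, 4.8, 5.2]      # two "tissue" intensity clusters
means, labels = em_gmm2(pixels)
```

The paper's constraint centralises the mixture components during optimisation precisely to avoid the redundant/empty-component failures that plain EM like this can exhibit on harder data.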
8. Adjusting the Ground Truth Annotations for Connectivity-Based Learning to Delineate.
- Author
- Oner, Doruk, Kozinski, Mateusz, Citraro, Leonardo, and Fua, Pascal
- Subjects
- *DEEP learning, *ANNOTATIONS, *NETWORK performance
- Abstract
Deep learning-based approaches to delineating 3D structure depend on accurate annotations to train the networks. Yet in practice, people, no matter how conscientious, have trouble precisely delineating in 3D and on a large scale, in part because the data is often hard to interpret visually and in part because the 3D interfaces are awkward to use. In this paper, we introduce a method that explicitly accounts for annotation inaccuracies. To this end, we treat the annotations as active contour models that can deform themselves while preserving their topology. This enables us to jointly train the network and correct potential errors in the original annotations. The result is an approach that boosts performance of deep networks trained with potentially inaccurate annotations. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
9. Anomaly Matters: An Anomaly-Oriented Model for Medical Visual Question Answering.
- Author
- Cong, Fuze, Xu, Shibiao, Guo, Li, and Tian, Yinbing
- Subjects
- *DIAGNOSTIC imaging, *LEARNING strategies
- Abstract
Medical images contain various abnormal regions, most of which are closely related to the lesions or diseases. The abnormality or lesion is one of the major concerns during clinical practice and therefore becomes the key in answering questions about medical images. However, the recent efforts still focus on constructing a generic Visual Question Answering framework for medical-domain tasks, which is not adequate for practical medical requirements and applications. In this paper, we present two novel medical-specific modules named multiplication anomaly sensitive module and residual anomaly sensitive module to utilize weakly supervised anomaly localization information in medical Visual Question Answering. Firstly, the proposed multiplication anomaly sensitive module designed for anomaly-related questions can mask the feature of the whole image according to the anomaly location map. Secondly, the residual anomaly sensitive module could learn a flexible anomaly feature while preserving the information of the original questioned image, which is more helpful in answering anomaly-unrelated questions. Thirdly, the transformer decoder and multi-task learning strategy are combined to further enhance the question-reasoning ability and the model generalization performance. Finally, qualitative and quantitative experiments on a variety of medical datasets exhibit the superiority of the proposed approaches compared to the state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
10. Two-Stage Mesh Deep Learning for Automated Tooth Segmentation and Landmark Localization on 3D Intraoral Scans.
- Author
- Wu, Tai-Hsien, Lian, Chunfeng, Lee, Sanghee, Pastewait, Matthew, Piers, Christian, Liu, Jie, Wang, Fan, Wang, Li, Chiu, Chiung-Ying, Wang, Wenchi, Jackson, Christina, Chao, Wei-Lun, Shen, Dinggang, and Ko, Ching-Chang
- Subjects
- *DEEP learning, *TEETH, *CORRECTIVE orthodontics, *ORTHODONTISTS
- Abstract
Accurately segmenting teeth and identifying the corresponding anatomical landmarks on dental mesh models are essential in computer-aided orthodontic treatment. Manually performing these two tasks is time-consuming, tedious, and, more importantly, highly dependent on orthodontists’ experiences due to the abnormality and large-scale variance of patients’ teeth. Some machine learning-based methods have been designed and applied in the orthodontic field to automatically segment dental meshes (e.g., intraoral scans). In contrast, the number of studies on tooth landmark localization is still limited. This paper proposes a two-stage framework based on mesh deep learning (called TS-MDL) for joint tooth labeling and landmark identification on raw intraoral scans. Our TS-MDL first adopts an end-to-end iMeshSegNet method (i.e., a variant of the existing MeshSegNet with both improved accuracy and efficiency) to label each tooth on the downsampled scan. Guided by the segmentation outputs, our TS-MDL further selects each tooth’s region of interest (ROI) on the original mesh to construct a lightweight variant of the pioneering PointNet (i.e., PointNet-Reg) for regressing the corresponding landmark heatmaps. Our TS-MDL was evaluated on a real-clinical dataset, showing promising segmentation and localization performance. Specifically, iMeshSegNet in the first stage of TS-MDL reached an average Dice similarity coefficient (DSC) of 0.964 ± 0.054, significantly outperforming the original MeshSegNet. In the second stage, PointNet-Reg achieved a mean absolute error (MAE) of 0.597 ± 0.761 mm in distances between the prediction and ground truth for 66 landmarks, which is superior to other networks for landmark detection. All these results suggest the potential usage of our TS-MDL in orthodontics. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
11. Adversarial Evolving Neural Network for Longitudinal Knee Osteoarthritis Prediction.
- Author
- Hu, Kun, Wu, Wenhua, Li, Wei, Simic, Milena, Zomaya, Albert, and Wang, Zhiyong
- Subjects
- *KNEE osteoarthritis, *DEEP learning, *JOINT diseases, *IMAGE representation, *OSTEOARTHRITIS, *X-ray imaging
- Abstract
Knee osteoarthritis (KOA), a disabling joint disease, has doubled in prevalence since the mid-20th century. Early diagnosis of longitudinal KOA grades has become increasingly important for effective monitoring and intervention. Although recent studies have achieved promising performance for baseline KOA grading, longitudinal KOA grading has been seldom studied and the KOA domain knowledge has not been well explored yet. In this paper, a novel deep learning architecture, namely the adversarial evolving neural network (A-ENN), is proposed for longitudinal grading of KOA severity. As the disease progresses from mild to severe, ENN exploits the progression patterns to accurately characterize the disease by comparing an input image to the template images of different KL grades using convolution and deconvolution computations. In addition, an adversarial training scheme with a discriminator is developed to obtain the evolution traces. Thus, the evolution traces, as fine-grained domain knowledge, are further fused with the general convolutional image representations for longitudinal grading. Note that ENN can be applied to other learning tasks together with existing deep architectures, in which the responses characterize progressive representations. Comprehensive experiments on the Osteoarthritis Initiative (OAI) dataset were conducted to evaluate the proposed method. An overall accuracy of 62.7% was achieved, with baseline, 12-month, 24-month, 36-month, and 48-month accuracies of 64.6%, 63.9%, 63.2%, 61.8%, and 60.2%, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
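The template-comparison idea above has a simple non-learned analogue worth keeping in mind: assign the grade whose template is closest to the input under sum of squared differences. This is a hypothetical illustration only, not the paper's convolution/deconvolution comparison, and the toy templates below are ours.

```python
# Nearest-template grading by sum of squared differences (SSD).

def ssd(a, b):
    """Sum of squared differences between two equally-sized images."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def grade(image, templates):
    """Assign the KL grade whose template best matches the image."""
    return min(templates, key=lambda g: ssd(image, templates[g]))

templates = {0: [0, 0, 0], 2: [3, 3, 3], 4: [6, 6, 6]}   # toy per-grade templates
predicted = grade([2.8, 3.1, 2.9], templates)
```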
12. SSIS-Seg: Simulation-Supervised Image Synthesis for Surgical Instrument Segmentation.
- Author
- Colleoni, Emanuele, Psychogyios, Dimitris, Van Amsterdam, Beatrice, Vasconcelos, Francisco, and Stoyanov, Danail
- Subjects
- *SURGICAL instruments, *DEEP learning, *COST functions, *IMAGE segmentation, *INDUSTRIAL robots, *SURGICAL robots
- Abstract
Surgical instrument segmentation can be used in a range of computer assisted interventions and automation in surgical robotics. While deep learning architectures have rapidly advanced the robustness and performance of segmentation models, most are still reliant on supervision and large quantities of labelled data. In this paper, we present a novel method for surgical image generation that can fuse robotic instrument simulation and recent domain adaptation techniques to synthesize artificial surgical images to train surgical instrument segmentation models. We integrate attention modules into well established image generation pipelines and propose a novel cost function to support supervision from simulation frames in model training. We provide an extensive evaluation of our method in terms of segmentation performance along with a validation study on image quality using evaluation metrics. Additionally, we release a novel segmentation dataset from real surgeries that will be shared for research purposes. Both binary and semantic segmentation have been considered, and we show the capability of our synthetic images to train segmentation models compared with the latest methods from the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
13. AutoImplant 2020: First MICCAI Challenge on Automatic Cranial Implant Design.
- Author
- Li, Jianning, Pimentel, Pedro, Szengel, Angelika, Ehlke, Moritz, Lamecker, Hans, Zachow, Stefan, Estacio, Laura, Doenitz, Christian, Ramm, Heiko, Shi, Haochen, Chen, Xiaojun, Matzkin, Franco, Newcombe, Virginia, Ferrante, Enzo, Jin, Yuan, Ellis, David G., Aizenberg, Michele R., Kodym, Oldrich, Spanel, Michal, and Herout, Adam
- Subjects
- *DEEP learning, *IMAGE reconstruction, *NEUROSURGEONS
- Abstract
The aim of this paper is to provide a comprehensive overview of the MICCAI 2020 AutoImplant Challenge. The approaches and publications submitted and accepted within the challenge are summarized and reported, highlighting common algorithmic trends and algorithmic diversity. Furthermore, the evaluation results are presented, compared, and discussed with regard to the challenge aim: seeking low-cost, fast, and fully automated solutions for cranial implant design. Based on feedback from collaborating neurosurgeons, this paper concludes by stating open issues and post-challenge requirements for intra-operative use. The code can be found at https://github.com/Jianningli/tmi. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
14. Adapt Everywhere: Unsupervised Adaptation of Point-Clouds and Entropy Minimization for Multi-Modal Cardiac Image Segmentation.
- Author
- Vesal, Sulaiman, Gu, Mingxuan, Kosti, Ronak, Maier, Andreas, and Ravikumar, Nishant
- Subjects
- *IMAGE segmentation, *CARDIAC imaging, *ENTROPY, *DATA distribution, *DEEP learning, *MAGNETIC resonance imaging
- Abstract
Deep learning models are sensitive to domain shift. A model trained on images from one domain cannot generalise well when tested on images from a different domain, despite capturing similar anatomical structures, mainly because the data distributions of the two domains differ. Moreover, creating annotations for every new modality is a tedious and time-consuming task, which also suffers from high inter- and intra-observer variability. Unsupervised domain adaptation (UDA) methods intend to reduce the gap between source and target domains by leveraging source-domain labelled data to generate labels for the target domain. However, current state-of-the-art (SOTA) UDA methods demonstrate degraded performance when there is insufficient data in the source and target domains. In this paper, we present a novel UDA method for multi-modal cardiac image segmentation. The proposed method is based on adversarial learning and adapts network features between source and target domains in different spaces. The paper introduces an end-to-end framework that integrates: a) entropy minimization, b) output feature space alignment, and c) a novel point-cloud shape adaptation based on the latent features learned by the segmentation model. We validated our method on two cardiac datasets by adapting from the annotated source domain, bSSFP-MRI (balanced steady-state free precession MRI), to the unannotated target domain, LGE-MRI (late gadolinium enhancement MRI), for the multi-sequence dataset; and from MRI (source) to CT (target) for the cross-modality dataset. The results highlight that by enforcing adversarial learning in different parts of the network, the proposed method delivers promising performance compared to other SOTA methods. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
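The entropy-minimization component (a) in the abstract above rests on a simple quantity: the Shannon entropy of the per-pixel class probabilities, which is low when predictions are confident. A minimal sketch of that quantity (our own function names; the paper minimises it via gradient descent on target-domain images):

```python
import math

# Per-pixel prediction entropy and its mean over a probability map.

def pixel_entropy(probs):
    """Shannon entropy of one pixel's class-probability vector (natural log)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def mean_entropy(prob_map):
    """Mean entropy over all pixels; the training objective pushes this down."""
    return sum(pixel_entropy(p) for p in prob_map) / len(prob_map)

confident = [[0.99, 0.01], [0.98, 0.02]]   # low-entropy predictions
uncertain = [[0.5, 0.5], [0.6, 0.4]]       # high-entropy predictions
e_conf = mean_entropy(confident)
e_unc = mean_entropy(uncertain)
```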
15. ADAM Challenge: Detecting Age-Related Macular Degeneration From Fundus Images.
- Author
- Fang, Huihui, Li, Fei, Fu, Huazhu, Sun, Xu, Cao, Xingxing, Lin, Fengbin, Son, Jaemin, Kim, Sunho, Quellec, Gwenole, Matta, Sarah, Shankaranarayana, Sharath M., Chen, Yi-Ting, Wang, Chuen-Heng, Shah, Nisarg A., Lee, Chia-Yen, Hsu, Chih-Chung, Xie, Hai, Lei, Baiying, Baid, Ujjwal, and Innani, Shubham
- Subjects
- *MACULAR degeneration, *MACHINE learning, *RETINAL diseases, *OPTIC disc, *LOW vision, *DEEP learning
- Abstract
Age-related macular degeneration (AMD) is the leading cause of visual impairment among the elderly worldwide. Early detection of AMD is of great importance, as the vision loss caused by this disease is irreversible and permanent. Color fundus photography is the most cost-effective imaging modality to screen for retinal disorders. Cutting-edge deep learning based algorithms have recently been developed for automatically detecting AMD from fundus images. However, there is still a lack of comprehensive annotated datasets and standard evaluation benchmarks. To deal with this issue, we set up the Automatic Detection challenge on Age-related Macular degeneration (ADAM), which was held as a satellite event of the ISBI 2020 conference. The ADAM challenge consisted of four tasks which cover the main aspects of detecting and characterizing AMD from fundus images, including detection of AMD, detection and segmentation of the optic disc, localization of the fovea, and detection and segmentation of lesions. As part of the ADAM challenge, we have released a comprehensive dataset of 1200 fundus images with AMD diagnostic labels, pixel-wise segmentation masks for both the optic disc and AMD-related lesions (drusen, exudates, hemorrhages and scars, among others), as well as the coordinates of the macular fovea. A uniform evaluation framework has been built to make a fair comparison of different models using this dataset. During the ADAM challenge, 610 results were submitted for online evaluation, with 11 teams finally participating in the onsite challenge. This paper introduces the challenge, the dataset and the evaluation methods, summarizes the participating methods, and analyzes their results for each task. In particular, we observed that the ensembling strategy and the incorporation of clinical domain knowledge were key to improving the performance of the deep learning models. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
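The abstract above singles out ensembling as a key ingredient of the best-performing entries. The most basic form is probability averaging across models before thresholding; the sketch below is a hypothetical illustration of that form only, not any participating team's actual pipeline.

```python
# Simple probability-averaging ensemble for a binary (AMD / no AMD) decision.

def ensemble(prob_lists, threshold=0.5):
    """Average per-model probabilities per image, then threshold the mean."""
    preds = []
    for probs in zip(*prob_lists):
        mean = sum(probs) / len(probs)
        preds.append(1 if mean >= threshold else 0)
    return preds

model_a = [0.9, 0.2, 0.6]      # per-image AMD probability from three models
model_b = [0.8, 0.4, 0.4]
model_c = [0.7, 0.1, 0.6]
labels = ensemble([model_a, model_b, model_c])
```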
16. Sam’s Net: A Self-Augmented Multistage Deep-Learning Network for End-to-End Reconstruction of Limited Angle CT.
- Author
- Chen, Changyu, Xing, Yuxiang, Gao, Hewei, Zhang, Li, and Chen, Zhiqiang
- Subjects
- *IMAGE reconstruction algorithms, *DEEP learning, *COMPUTED tomography, *DATA distribution, *ANGLES, *ARTIFICIAL neural networks
- Abstract
Limited angle reconstruction is a typical ill-posed problem in computed tomography (CT). Given incomplete projection data, images reconstructed by conventional analytical algorithms and iterative methods suffer from severe structural distortions and artifacts. In this paper, we propose a self-augmented multi-stage deep-learning network (Sam’s Net) for end-to-end reconstruction of limited angle CT. With the merit of the alternating minimization technique, Sam’s Net integrates multi-stage self-constraints into cross-domain optimization to provide additional constraints on the manifold of neural networks. In practice, a sinogram completion network (SCNet) and an artifact suppression network (ASNet), together with domain transformation layers, constitute the backbone for cross-domain optimization. An online self-augmentation module is designed following the manner defined by alternating minimization, which enables a self-augmented learning procedure and a multi-stage inference manner. In addition, a substitution operation is applied as a hard constraint on the solution space based on data fidelity, and a learnable weighting layer is constructed for data consistency refinement. Sam’s Net forms a new framework for ill-posed reconstruction problems. In the training phase, the self-augmented procedure guides the optimization into a tightened solution space with enriched diverse data distribution and enhanced data consistency. In the inference phase, multi-stage prediction can improve performance progressively. Extensive experiments with both simulated and practical projections under 90-degree and 120-degree fan-beam configurations validate that Sam’s Net can significantly improve the reconstruction quality with high stability and robustness. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
17. Localization of Craniomaxillofacial Landmarks on CBCT Images Using 3D Mask R-CNN and Local Dependency Learning.
- Author
- Lang, Yankun, Lian, Chunfeng, Xiao, Deqiang, Deng, Hannah, Thung, Kim-Han, Yuan, Peng, Gateno, Jaime, Kuang, Tianshu, Alfi, David M., Wang, Li, Shen, Dinggang, Xia, James J., and Yap, Pew-Thian
- Subjects
- *CONE beam computed tomography, *THREE-dimensional imaging, *DEEP learning
- Abstract
Cephalometric analysis relies on accurate detection of craniomaxillofacial (CMF) landmarks from cone-beam computed tomography (CBCT) images. However, due to the complexity of CMF bony structures, it is difficult to localize landmarks efficiently and accurately. In this paper, we propose a deep learning framework to tackle this challenge by jointly digitizing 105 CMF landmarks on CBCT images. By explicitly learning the local geometrical relationships between the landmarks, our approach extends Mask R-CNN for end-to-end prediction of landmark locations. Specifically, we first apply a detection network on a down-sampled 3D image to leverage global contextual information to predict the approximate locations of the landmarks. We subsequently leverage local information provided by higher-resolution image patches to refine the landmark locations. On patients with varying non-syndromic jaw deformities, our method achieves an average detection accuracy of 1.38 ± 0.95 mm, outperforming a related state-of-the-art method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
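The coarse-to-fine scheme above (approximate detection on a down-sampled volume, refinement on full-resolution patches) can be mimicked in a hypothetical 1-D toy: find the peak on a subsampled signal, then refine within a local full-resolution window. This only mirrors the two-stage search pattern, not the Mask R-CNN pipeline itself.

```python
# Coarse-to-fine peak localisation on a 1-D "heatmap".

def coarse_to_fine(signal, factor=4, window=4):
    """Locate the maximum coarsely on a subsampled signal, then refine locally."""
    coarse = signal[::factor]                                # down-sample
    approx = max(range(len(coarse)), key=lambda i: coarse[i]) * factor
    lo = max(0, approx - window)
    hi = min(len(signal), approx + window + 1)
    return max(range(lo, hi), key=lambda i: signal[i])       # local refinement

heat = [0, 1, 2, 3, 4, 5, 9, 5, 4, 3, 2, 1]   # true landmark at index 6
peak = coarse_to_fine(heat)
```

The payoff is the same as in the paper: the coarse pass gives global context cheaply, and the fine pass only has to search a small neighbourhood at full resolution.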
18. Self-Training Strategy Based on Finite Element Method for Adaptive Bioluminescence Tomography Reconstruction.
- Author
- Zhang, Xuanxuan, Cao, Xu, Zhang, Peng, Song, Fan, Zhang, Jiulou, Zhang, Lin, and Zhang, Guanglei
- Subjects
- *FINITE element method, *OPTICAL tomography, *BIOLUMINESCENCE, *TOMOGRAPHY, *VISIBLE spectra, *RANDOM forest algorithms
- Abstract
Bioluminescence tomography (BLT) is a promising pre-clinical imaging technique for a wide variety of biomedical applications, which can non-invasively reveal functional activities inside living animal bodies through the detection of visible or near-infrared light produced by bioluminescent reactions. Recently, reconstruction approaches based on deep learning have shown great potential in optical tomography modalities. However, these reports only generate data with stationary patterns of constant target number, shape, and size. Neural networks trained on such data sets struggle to reconstruct patterns outside the data sets, which severely restricts the application of deep learning to optical tomography reconstruction. To address this problem, a self-training strategy is proposed for BLT reconstruction in this paper. The proposed strategy can quickly generate large-scale BLT data sets with random target numbers, shapes, and sizes through an algorithm called random seed growth, and the neural network is automatically self-trained. In addition, the proposed strategy uses the neural network to build a map between photon densities on the surface and inside the imaged object, rather than an end-to-end neural network that directly infers the distribution of sources from the photon density on the surface. The map of photon density is further converted into the distribution of sources through multiplication with the stiffness matrix. Simulation, phantom, and mouse studies are carried out. Results demonstrate the effectiveness of the proposed self-training strategy. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
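The "random seed growth" generator described above can be sketched as plain region growing: start from a random seed cell and repeatedly attach a random free neighbour until a random target size is reached. This is our guess at the algorithm from its name and stated purpose (random number/shape/size targets), for illustration only.

```python
import random

# Grow one connected region of a requested size from a random seed on a grid.

def grow_region(h, w, size, rng):
    seed = (rng.randrange(h), rng.randrange(w))
    region = {seed}
    frontier = [seed]
    while len(region) < size and frontier:
        r, c = frontier[rng.randrange(len(frontier))]       # pick a growth site
        nbrs = [(r + dr, c + dc)
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < h and 0 <= c + dc < w
                and (r + dr, c + dc) not in region]
        if not nbrs:
            frontier.remove((r, c))                         # site is exhausted
            continue
        cell = nbrs[rng.randrange(len(nbrs))]
        region.add(cell)
        frontier.append(cell)
    return region

rng = random.Random(0)
size = rng.randrange(5, 40)            # random target size, as in the paper
target = grow_region(32, 32, size, rng)
```

Repeating this with a random number of seeds yields training targets of varied count, shape, and size, which is the diversity the self-training strategy relies on.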
19. Learning From Synthetic CT Images via Test-Time Training for Liver Tumor Segmentation.
- Author
- Lyu, Fei, Ye, Mang, Ma, Andy J., Yip, Terry Cheuk-Fung, Wong, Grace Lai-Hung, and Yuen, Pong C.
- Subjects
- *LIVER tumors, *COMPUTED tomography, *DEEP learning, *TUMOR diagnosis, *IMAGE segmentation
- Abstract
Automatic liver tumor segmentation could offer assistance to radiologists in liver tumor diagnosis, and its performance has been significantly improved by recent deep learning based methods. These methods rely on large-scale well-annotated training datasets, but collecting such datasets is time-consuming and labor-intensive, which could hinder their performance in practical situations. Learning from synthetic data is an encouraging solution to address this problem. In our task, synthetic tumors can be injected to healthy images to form training pairs. However, directly applying the model trained using the synthetic tumor images on real test images performs poorly due to the domain shift problem. In this paper, we propose a novel approach, namely Synthetic-to-Real Test-Time Training (SR-TTT), to reduce the domain gap between synthetic training images and real test images. Specifically, we add a self-supervised auxiliary task, i.e., two-step reconstruction, which takes the output of the main segmentation task as its input to build an explicit connection between these two tasks. Moreover, we design a scheduled mixture strategy to avoid error accumulation and bias explosion in the training process. During test time, we adapt the segmentation model to each test image with self-supervision from the auxiliary task so as to improve the inference performance. The proposed method is extensively evaluated on two public datasets for liver tumor segmentation. The experimental results demonstrate that our proposed SR-TTT can effectively mitigate the synthetic-to-real domain shift problem in the liver tumor segmentation task, and is superior to existing state-of-the-art approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
20. Multimodal MRI Reconstruction Assisted With Spatial Alignment Network.
- Author
- Xuan, Kai, Xiang, Lei, Huang, Xiaoqian, Zhang, Lichi, Liao, Shu, Shen, Dinggang, and Wang, Qian
- Subjects
- *MAGNETIC resonance imaging
- Abstract
In clinical practice, multi-modal magnetic resonance imaging (MRI) with different contrasts is usually acquired in a single study to assess different properties of the same region of interest in the human body. The whole acquisition process can be accelerated by having one or more modalities under-sampled in k-space. Recent research has shown that, considering the redundancy between different modalities, a target MRI modality under-sampled in k-space can be more efficiently reconstructed with a fully-sampled reference MRI modality. However, we find that the performance of the aforementioned multi-modal reconstruction can be negatively affected by subtle spatial misalignment between different modalities, which is actually common in clinical practice. In this paper, we improve the quality of multi-modal reconstruction by compensating for such spatial misalignment with a spatial alignment network. First, our spatial alignment network estimates the displacement between the fully-sampled reference and the under-sampled target images, and warps the reference image accordingly. Then, the aligned fully-sampled reference image joins the multi-modal reconstruction of the under-sampled target image. Also, considering the contrast difference between the target and reference images, we have designed a cross-modality-synthesis-based registration loss in combination with the reconstruction loss, to jointly train the spatial alignment network and the reconstruction network. The experiments on both clinical MRI and multi-coil k-space raw data demonstrate the superiority and robustness of the multi-modal MRI reconstruction empowered with our spatial alignment network. Our code is publicly available at https://github.com/woxuankai/SpatialAlignmentNetwork. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
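The core alignment step described in the abstract above — estimate a displacement field between the reference and target, then warp the reference before it joins the reconstruction — can be illustrated with a minimal numpy sketch. The nearest-neighbor warp and the toy displacement field are illustrative stand-ins; the paper's spatial alignment network is a learned model with differentiable interpolation.

```python
import numpy as np

def warp_nearest(reference, displacement):
    """Warp a 2D reference image by a per-pixel displacement field.

    reference: (H, W) array; displacement: (H, W, 2) array of (dy, dx)
    offsets, standing in for the output of the spatial alignment network.
    Nearest-neighbor sampling keeps the sketch dependency-free.
    """
    h, w = reference.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round(ys + displacement[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + displacement[..., 1]).astype(int), 0, w - 1)
    return reference[src_y, src_x]

# A reference whose feature sits one column too far right is realigned
# by a constant (0, +1) displacement field (sample one column rightward).
ref = np.zeros((4, 4))
ref[1, 2] = 1.0
disp = np.zeros((4, 4, 2))
disp[..., 1] = 1.0
aligned = warp_nearest(ref, disp)
```

After warping, `aligned` has the feature at column 1, i.e. shifted left by one pixel, ready to be stacked with the under-sampled target for joint reconstruction.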
21. Global and Local Feature Reconstruction for Medical Image Segmentation.
- Author
-
Song, Jiahuan, Chen, Xinjian, Zhu, Qianlong, Shi, Fei, Xiang, Dehui, Chen, Zhongyue, Fan, Ying, Pan, Lingjiao, and Zhu, Weifang
- Subjects
- *
COMPUTER-assisted image analysis (Medicine) , *IMAGE reconstruction , *DIAGNOSTIC imaging , *FEATURE extraction , *SPATIAL ability , *FUZZY algorithms , *IMAGE segmentation - Abstract
Capturing long-range dependencies and restoring the spatial information of down-sampled feature maps are the basis of the encoder-decoder structure networks in medical image segmentation. U-Net based methods use feature fusion to alleviate these two problems, but the global feature extraction ability and spatial information recovery ability of U-Net are still insufficient. In this paper, we propose a Global Feature Reconstruction (GFR) module to efficiently capture global context features and a Local Feature Reconstruction (LFR) module to dynamically up-sample features, respectively. For the GFR module, we first extract the global features with category representation from the feature map, then use the different-level global features to reconstruct features at each location. The GFR module establishes a connection for each pair of feature elements in the entire space from a global perspective and transfers semantic information from the deep layers to the shallow layers. For the LFR module, we use low-level feature maps to guide the up-sampling process of high-level feature maps. Specifically, we use local neighborhoods to reconstruct features to achieve the transfer of spatial information. Based on the encoder-decoder architecture, we propose a Global and Local Feature Reconstruction Network (GLFRNet), in which the GFR modules are applied as skip connections and the LFR modules constitute the decoder path. The proposed GLFRNet is applied to four different medical image segmentation tasks and achieves state-of-the-art performance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
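A toy numpy sketch of the GFR idea described above — aggregate per-location features into a few global "category" descriptors, then rebuild each location from them. The random matrices stand in for learned projection weights; the actual module's layers and shapes differ.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_feature_reconstruction(feats, n_classes, rng):
    """Attention-style global feature reconstruction (illustrative).

    feats: (N, C) features for N spatial locations. A projection scores
    each location against n_classes 'category' slots, global features are
    attention-weighted sums over locations, and each location is rebuilt
    from the global descriptors. Weight matrices are random stand-ins.
    """
    n, c = feats.shape
    w_score = rng.standard_normal((c, n_classes))
    attn = softmax(feats @ w_score, axis=0)      # (N, K): per-class weights over locations
    global_feats = attn.T @ feats                # (K, C): one descriptor per category
    w_query = rng.standard_normal((c, n_classes))
    assign = softmax(feats @ w_query, axis=1)    # (N, K): each location's category mix
    return assign @ global_feats                 # (N, C): reconstructed features

rng = np.random.default_rng(0)
feats = rng.standard_normal((16, 8))
recon = global_feature_reconstruction(feats, n_classes=4, rng=rng)
```

The reconstruction keeps the feature dimensionality while forcing every location through the small set of global descriptors, which is the mechanism by which semantic information propagates from deep to shallow layers.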
22. Deformation-Compensated Learning for Image Reconstruction Without Ground Truth.
- Author
-
Gan, Weijie, Sun, Yu, Eldeniz, Cihat, Liu, Jiaming, An, Hongyu, and Kamilov, Ulugbek S.
- Subjects
- *
ARTIFICIAL neural networks , *MAGNETIC resonance imaging , *CONVOLUTIONAL neural networks , *DIAGNOSTIC imaging , *IMAGE reconstruction - Abstract
Deep neural networks for medical image reconstruction are traditionally trained using high-quality ground-truth images as training targets. Recent work on Noise2Noise (N2N) has shown the potential of using multiple noisy measurements of the same object as an alternative to having a ground-truth. However, existing N2N-based methods are not suitable for learning from the measurements of an object undergoing nonrigid deformation. This paper addresses this issue by proposing the deformation-compensated learning (DeCoLearn) method for training deep reconstruction networks by compensating for object deformations. A key component of DeCoLearn is a deep registration module, which is jointly trained with the deep reconstruction network without any ground-truth supervision. We validate DeCoLearn on both simulated and experimentally collected magnetic resonance imaging (MRI) data and show that it significantly improves imaging quality. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
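The training signal the abstract describes — compare a reconstruction against a second noisy measurement after compensating the deformation, so no ground truth is ever needed — can be sketched in 1D, with an integer shift standing in for the learned deformation field:

```python
import numpy as np

def n2n_deformation_loss(recon_a, noisy_b, shift):
    """Noise2Noise-style loss with deformation compensation (toy, 1D).

    recon_a: reconstruction of measurement A; noisy_b: second noisy
    measurement of the same object, deformed by an integer `shift`
    (a stand-in for the deformation the registration module estimates).
    The reconstruction is warped onto B's frame before the data loss.
    """
    warped = np.roll(recon_a, shift)   # compensate the deformation
    return float(np.mean((warped - noisy_b) ** 2))

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 2 * np.pi, 64))
noisy_b = np.roll(clean, 3) + 0.01 * rng.standard_normal(64)

# A perfect reconstruction, once warped, closely matches the second
# measurement; skipping the compensation is penalized much more.
good = n2n_deformation_loss(clean, noisy_b, shift=3)
bad = n2n_deformation_loss(clean, noisy_b, shift=0)
```

In DeCoLearn the shift is replaced by a dense deformation field produced by the jointly trained registration module, but the structure of the objective is the same.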
23. Deep Relation Learning for Regression and Its Application to Brain Age Estimation.
- Author
-
He, Sheng, Feng, Yanfang, Grant, P. Ellen, and Ou, Yangming
- Subjects
- *
ARTIFICIAL neural networks , *DEEP learning , *CONVOLUTIONAL neural networks , *FEATURE extraction - Abstract
Most deep learning models for temporal regression directly output the estimation based on single input images, ignoring the relationships between different images. In this paper, we propose deep relation learning for regression, aiming to learn different relations between a pair of input images. Four non-linear relations are considered: “cumulative relation,” “relative relation,” “maximal relation” and “minimal relation.” These four relations are learned simultaneously from one deep neural network which has two parts: feature extraction and relation regression. We use an efficient convolutional neural network to extract deep features from the pair of input images and apply a Transformer for relation learning. The proposed method is evaluated on a merged dataset with 6,049 subjects with ages of 0–97 years using 5-fold cross-validation for the task of brain age estimation. The experimental results have shown that the proposed method achieved a mean absolute error (MAE) of 2.38 years, which is lower than the MAEs of 8 other state-of-the-art algorithms with statistical significance (p < 0.05) in a two-sided paired t-test. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
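The four relations named in the abstract can be illustrated with plausible definitions consistent with their names (the paper regresses them with a network rather than computing them in closed form). Note the pairwise relations over-determine the individual estimates, e.g. one age is recoverable from the cumulative and relative relations alone:

```python
def pair_relations(age_a, age_b):
    """Illustrative targets for the four relations between a pair of ages.

    These definitions match the relation names ('cumulative', 'relative',
    'maximal', 'minimal'); the actual model learns them jointly from the
    image pair instead of computing them from known ages.
    """
    return {
        "cumulative": age_a + age_b,   # sum of the pair
        "relative": age_a - age_b,     # signed difference
        "maximal": max(age_a, age_b),
        "minimal": min(age_a, age_b),
    }

rel = pair_relations(30.0, 42.0)
# A single-image estimate can be recovered from the relations, e.g.
# age_a = (cumulative + relative) / 2.
age_a = (rel["cumulative"] + rel["relative"]) / 2
```

This redundancy is presumably why learning the relations jointly regularizes the per-image estimate.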
24. PDBL: Improving Histopathological Tissue Classification With Plug-and-Play Pyramidal Deep-Broad Learning.
- Author
-
Lin, Jiatai, Han, Guoqiang, Pan, Xipeng, Liu, Zaiyi, Chen, Hao, Li, Danyi, Jia, Xiping, Shi, Zhenwei, Wang, Zhizhen, Cui, Yanfen, Li, Haiming, Liang, Changhong, Liang, Li, Wang, Ying, and Han, Chu
- Subjects
- *
DEEP learning , *COMPUTER vision , *HISTOPATHOLOGY , *TISSUES , *SOURCE code , *CLASSIFICATION - Abstract
Histopathological tissue classification is a simpler way to achieve semantic segmentation for the whole slide images, which can alleviate the requirement of pixel-level dense annotations. Existing works mostly leverage the popular CNN classification backbones in computer vision to achieve histopathological tissue classification. In this paper, we propose a super lightweight plug-and-play module, named Pyramidal Deep-Broad Learning (PDBL), for any well-trained classification backbone to improve the classification performance without a re-training burden. For each patch, we construct a multi-resolution image pyramid to obtain the pyramidal contextual information. For each level in the pyramid, we extract the multi-scale deep-broad features by our proposed Deep-Broad block (DB-block). We equip PDBL in three popular classification backbones, ShuffleNetV2, EfficientNetb0, and ResNet50, to evaluate the effectiveness and efficiency of our proposed module on two datasets (Kather Multiclass Dataset and the LC25000 Dataset). Experimental results demonstrate that the proposed PDBL can steadily improve the tissue-level classification performance for any CNN backbone, especially for the lightweight models when given a small amount of training samples (less than 10%). It greatly saves the computational resources and annotation efforts. The source code is available at: https://github.com/linjiatai/PDBL. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
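A minimal sketch of the multi-resolution pyramid construction the abstract mentions, using repeated 2x average pooling as an assumed downsampling scheme (the actual PDBL preprocessing may resize patches differently):

```python
import numpy as np

def image_pyramid(img, levels):
    """Build a multi-resolution pyramid by repeated 2x average pooling.

    A stand-in for the multi-resolution patch pyramid that PDBL feeds to
    its Deep-Broad blocks; the real module works on resized RGB patches
    passed through a trained backbone at each level.
    """
    pyramid = [img]
    for _ in range(levels - 1):
        h, w = pyramid[-1].shape
        cropped = pyramid[-1][: h // 2 * 2, : w // 2 * 2]  # even dims
        pooled = cropped.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(pooled)
    return pyramid

patch = np.arange(64.0).reshape(8, 8)
pyr = image_pyramid(patch, levels=3)
```

Each level halves the resolution while preserving mean intensity, so coarser levels carry progressively wider context for the same patch.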
25. DeepGrading: Deep Learning Grading of Corneal Nerve Tortuosity.
- Author
-
Mou, Lei, Qi, Hong, Liu, Yonghuai, Zheng, Yalin, Matthew, Peter, Su, Pan, Liu, Jiang, Zhang, Jiong, and Zhao, Yitian
- Subjects
- *
DEEP learning , *TORTUOSITY , *CORNEA , *NERVE fibers , *NERVES , *FEATURE extraction - Abstract
Accurate estimation and quantification of the corneal nerve fiber tortuosity in corneal confocal microscopy (CCM) is of great importance for disease understanding and clinical decision-making. However, the grading of corneal nerve tortuosity remains a great challenge due to the lack of agreements on the definition and quantification of tortuosity. In this paper, we propose a fully automated deep learning method that performs image-level tortuosity grading of corneal nerves, which is based on CCM images and segmented corneal nerves to further improve the grading accuracy with interpretability principles. The proposed method consists of two stages: 1) A pre-trained feature extraction backbone over ImageNet is fine-tuned with a proposed novel bilinear attention (BA) module for the prediction of the regions of interest (ROIs) and coarse grading of the image. The BA module enhances the ability of the network to model long-range dependencies and global contexts of nerve fibers by capturing second-order statistics of high-level features. 2) An auxiliary tortuosity grading network (AuxNet) is proposed to obtain an auxiliary grading over the identified ROIs, enabling the coarse and additional gradings to be finally fused together for more accurate final results. The experimental results show that our method surpasses existing methods in tortuosity grading, and achieves an overall accuracy of 85.64% in four-level classification. We also validate it over a clinical dataset, and the statistical analysis demonstrates a significant difference in tortuosity levels between the healthy control and diabetes groups. We have released a dataset with 1500 CCM images and their manual annotations of four tortuosity levels for public access. The code is available at: https://github.com/iMED-Lab/TortuosityGrading. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
26. Pyramid Convolutional RNN for MRI Image Reconstruction.
- Author
-
Chen, Eric Z., Wang, Puyang, Chen, Xiao, Chen, Terrence, and Sun, Shanhui
- Subjects
- *
MAGNETIC resonance imaging , *PYRAMIDS , *KNEE , *DEEP learning - Abstract
Fast and accurate MRI image reconstruction from undersampled data is crucial in clinical practice. Deep learning based reconstruction methods have shown promising advances in recent years. However, recovering fine details from undersampled data is still challenging. In this paper, we introduce a novel deep learning based method, Pyramid Convolutional RNN (PC-RNN), to reconstruct images from multiple scales. Based on the formulation of MRI reconstruction as an inverse problem, we design the PC-RNN model with three convolutional RNN (ConvRNN) modules to iteratively learn the features in multiple scales. Each ConvRNN module reconstructs images at different scales and the reconstructed images are combined by a final CNN module in a pyramid fashion. The multi-scale ConvRNN modules learn a coarse-to-fine image reconstruction. Unlike other common reconstruction methods for parallel imaging, PC-RNN does not employ coil sensitivity maps for multi-coil data and directly models the multiple coils as multi-channel inputs. The coil compression technique is applied to standardize data with various coil numbers, leading to more efficient training. We evaluate our model on the fastMRI knee and brain datasets and the results show that the proposed model outperforms other methods and can recover more details. The proposed method is one of the winner solutions in the 2019 fastMRI competition. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
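The coil compression step mentioned in the abstract is commonly implemented by projecting the physical coils onto the top singular vectors of the measured data; below is a generic SVD-based sketch of that technique (not the paper's exact pipeline, and shown on real-valued toy data rather than complex k-space):

```python
import numpy as np

def coil_compress(kspace, n_virtual):
    """SVD-based coil compression to a fixed number of virtual coils.

    kspace: (n_coils, n_samples) measurements. Projecting onto the top
    left singular vectors standardizes datasets with different coil
    counts, as needed for a fixed multi-channel network input.
    """
    u, s, vh = np.linalg.svd(kspace, full_matrices=False)
    return u[:, :n_virtual].conj().T @ kspace   # (n_virtual, n_samples)

rng = np.random.default_rng(2)
# 8 physical coils that are noisy mixtures of 2 underlying signals.
basis = rng.standard_normal((2, 256))
mix = rng.standard_normal((8, 2))
coils = mix @ basis + 1e-3 * rng.standard_normal((8, 256))
virtual = coil_compress(coils, n_virtual=2)
```

Because the coil measurements are highly correlated, a small number of virtual coils retains almost all of the signal energy.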
27. Sampling Possible Reconstructions of Undersampled Acquisitions in MR Imaging With a Deep Learned Prior.
- Author
-
Tezcan, Kerem C., Karani, Neerav, Baumgartner, Christian F., and Konukoglu, Ender
- Subjects
- *
MAGNETIC resonance imaging , *TIKHONOV regularization , *PHASE coding , *IMAGE reconstruction , *ACQUISITION of data - Abstract
Undersampling the k-space during MR acquisitions saves time but results in an ill-posed inversion problem, leading to an infinite set of images as possible solutions. Traditionally, this is tackled as a reconstruction problem by searching for a single “best” image out of this solution set according to some chosen regularization or prior. This approach, however, misses the possibility of other solutions and hence ignores the uncertainty in the inversion process. In this paper, we propose a method that instead returns multiple images which are possible under the acquisition model and the chosen prior to capture the uncertainty in the inversion process. To this end, we introduce a low dimensional latent space and model the posterior distribution of the latent vectors given the acquisition data in k-space, from which we can sample in the latent space and obtain the corresponding images. We use a variational autoencoder for the latent model and the Metropolis adjusted Langevin algorithm for the sampling. We evaluate our method on two datasets: images from the Human Connectome Project and in-house measured multi-coil images. We compare to five alternative methods. Results indicate that the proposed method produces images that match the measured k-space data better than the alternatives, while showing realistic structural variability. Furthermore, in contrast to the compared methods, the proposed method yields higher uncertainty in the undersampled phase encoding direction, as expected. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
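A minimal implementation of the Metropolis-adjusted Langevin algorithm used for the latent-space sampling, demonstrated on a toy 2D Gaussian in place of the VAE-based posterior the paper defines:

```python
import numpy as np

def mala_step(z, log_p, grad_log_p, step, rng):
    """One Metropolis-adjusted Langevin step on an unnormalized log density.

    log_p / grad_log_p stand in for the posterior over latent codes given
    the measured k-space data; in the paper these involve a VAE decoder.
    """
    noise = rng.standard_normal(z.shape)
    prop = z + step * grad_log_p(z) + np.sqrt(2 * step) * noise

    def log_q(a, b):  # log proposal density q(a | b)
        d = a - b - step * grad_log_p(b)
        return -np.sum(d ** 2) / (4 * step)

    log_alpha = log_p(prop) - log_p(z) + log_q(z, prop) - log_q(prop, z)
    if np.log(rng.uniform()) < log_alpha:
        return prop   # accept
    return z          # reject, keep current sample

# Sample a 2D standard Gaussian as a stand-in posterior.
rng = np.random.default_rng(3)
log_p = lambda z: -0.5 * np.sum(z ** 2)
grad = lambda z: -z
z = np.zeros(2)
samples = []
for _ in range(2000):
    z = mala_step(z, log_p, grad, step=0.2, rng=rng)
    samples.append(z)
samples = np.array(samples)
```

Each accepted latent sample would then be decoded into an image, giving the set of "possible reconstructions" the title refers to.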
28. Segmentation of the True Lumen of Aorta Dissection via Morphology-Constrained Stepwise Deep Mesh Regression.
- Author
-
Zhao, Jingliang, Zhao, Jie, Pang, Shumao, and Feng, Qianjin
- Subjects
- *
AORTIC dissection , *MNEMONICS , *AORTA - Abstract
The lumen of aortic dissection (AD) has important clinical value for preoperative diagnosis, intraoperative intervention, and post-operative evaluation of AD diseases. AD segmentation is challenging because (i) fitting its irregular profile by using traditional models is difficult, and (ii) the size of the AD image is usually so large that many algorithms have to perform down-sampling to reduce the computational burden, thereby reducing the resolution of the result. In this paper, an automatic AD segmentation algorithm, in which a 3D mesh is gradually moved to the surface of AD based on the offset estimated by a deep mesh deformation module, is presented. AD morphology is used to constrain the initial mesh and guide the deformation, which improves the efficiency of the deep network and avoids down-sampling. Moreover, a stepwise regression strategy is introduced to solve the mesh folding problem and improve the uniformity of the mesh points. On an AD database that involves 35 images, the proposed method obtains a mean Dice of 94.12% and a symmetric 95% Hausdorff distance of 2.85 mm, which outperforms five state-of-the-art AD segmentation methods. The average processing time is 16.6 s, and the memory used to train the network is only 0.36 GB, indicating that this method is easy to apply in clinical practice. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
29. Learning From Ambiguous Labels for Lung Nodule Malignancy Prediction.
- Author
-
Liao, Zehui, Xie, Yutong, Hu, Shishuai, and Xia, Yong
- Subjects
- *
PULMONARY nodules , *LUNGS , *COMPUTED tomography , *DEEP learning , *IMAGE representation , *LUNG cancer - Abstract
Lung nodule malignancy prediction is an essential step in the early diagnosis of lung cancer. Besides the difficulties commonly discussed, the challenges of this task also come from the ambiguous labels provided by annotators, since deep learning models have in some cases been found to reproduce or amplify human biases. In this paper, we propose a multi-view ‘divide-and-rule’ (MV-DAR) model to learn from both reliable and ambiguous annotations for lung nodule malignancy prediction on chest CT scans. According to the consistency and reliability of their annotations, we divide nodules into three sets: a consistent and reliable set (CR-Set), an inconsistent set (IC-Set), and a low reliable set (LR-Set). The nodule in IC-Set is annotated by multiple radiologists inconsistently, and the nodule in LR-Set is annotated by only one radiologist. Although ambiguous, inconsistent labels tell which label(s) is consistently excluded by all annotators, and the unreliable labels of a cohort of nodules are largely correct from the statistical point of view. Hence, both IC-Set and LR-Set can be used to facilitate the training of MV-DAR. Our MV-DAR contains three DAR models to characterize a lung nodule from three orthographic views and is trained following a two-stage procedure. Each DAR consists of three networks with the same architecture, including a prediction network (Prd-Net), a counterfactual network (CF-Net), and a low reliable network (LR-Net), which are trained on CR-Set, IC-Set, and LR-Set respectively in the pretraining phase. In the fine-tuning phase, the image representation ability learned by CF-Net and LR-Net is transferred to Prd-Net by negative-attention module (NA-Module) and consistent-attention module (CA-Module), aiming to boost the prediction ability of Prd-Net. The MV-DAR model has been evaluated on the LIDC-IDRI dataset and LUNGx dataset. 
Our results indicate not only the effectiveness of the MV-DAR in learning from ambiguous labels but also its superiority over present noisy label-learning models in lung nodule malignancy prediction. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
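The three-way data split that drives MV-DAR's pretraining can be sketched directly from the abstract's definitions: a single annotator puts a nodule in LR-Set, multiple consistent annotations in CR-Set, multiple inconsistent annotations in IC-Set. The nodule ids and label values below are hypothetical; this sketches only the split, not the model.

```python
def divide_nodules(annotations):
    """Split nodules by annotation consistency and reliability.

    annotations: dict mapping nodule id -> list of labels from different
    radiologists. Returns (CR-Set, IC-Set, LR-Set) as dicts.
    """
    cr, ic, lr = {}, {}, {}
    for nid, labels in annotations.items():
        if len(labels) == 1:
            lr[nid] = labels          # only one radiologist: low reliability
        elif len(set(labels)) == 1:
            cr[nid] = labels          # multiple, all agree: consistent and reliable
        else:
            ic[nid] = labels          # multiple, disagree: inconsistent
    return cr, ic, lr

ann = {
    "n1": ["malignant", "malignant", "malignant"],
    "n2": ["benign", "malignant"],
    "n3": ["benign"],
}
cr_set, ic_set, lr_set = divide_nodules(ann)
```

Per the abstract, even the IC-Set and LR-Set carry usable signal (excluded labels, statistically mostly-correct labels), which is why all three partitions feed the pretraining of the three networks.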
30. Follow My Eye: Using Gaze to Supervise Computer-Aided Diagnosis.
- Author
-
Wang, Sheng, Ouyang, Xi, Liu, Tianming, Wang, Qian, and Shen, Dinggang
- Subjects
- *
COMPUTER-aided diagnosis , *EYE movements , *COMPUTER-assisted image analysis (Medicine) , *IMAGE analysis , *EYE , *X-ray imaging , *GAZE , *KNEE - Abstract
When the deep neural network (DNN) was first introduced to the medical image analysis community, researchers were impressed by its performance. However, it is now evident that a large amount of manually labeled data is often required to train a properly functioning DNN. This demand for supervision data and labels is a major bottleneck in current medical image analysis, since collecting a large number of annotations from experienced experts can be time-consuming and expensive. In this paper, we demonstrate that the eye movement of radiologists reading medical images can be a new form of supervision to train the DNN-based computer-aided diagnosis (CAD) system. Particularly, we record the tracks of the radiologists’ gaze when they are reading images. The gaze information is processed and then used to supervise the DNN’s attention via an Attention Consistency module. To the best of our knowledge, the above pipeline is among the earliest efforts to leverage expert eye movement for deep-learning-based CAD. We have conducted extensive experiments on knee X-ray images for osteoarthritis assessment. The results show that our method can achieve considerable improvement in diagnosis performance, with the help of gaze supervision. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
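One plausible minimal form of the Attention Consistency objective — penalize the model's attention map for deviating from the recorded gaze heatmap after both are normalized — is sketched below; the paper's exact loss and preprocessing may differ.

```python
import numpy as np

def attention_consistency_loss(model_attn, gaze_map):
    """MSE between normalized model attention and a gaze heatmap.

    Both maps are normalized to sum to 1, so only where the attention
    falls is penalized, not its overall scale. A hypothetical minimal
    form of gaze supervision, not the paper's implementation.
    """
    p = model_attn / model_attn.sum()
    q = gaze_map / gaze_map.sum()
    return float(np.mean((p - q) ** 2))

gaze = np.zeros((8, 8))
gaze[2:4, 2:4] = 1.0                      # where the radiologist looked
attn_good = gaze + 0.01                   # attention roughly matches gaze
attn_bad = np.zeros((8, 8))
attn_bad[6:8, 6:8] = 1.0                  # attention on the wrong region
loss_good = attention_consistency_loss(attn_good, gaze)
loss_bad = attention_consistency_loss(attn_bad, gaze)
```

Added to the diagnosis loss, such a term pushes the network's attention toward the regions experts actually inspect.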
31. Robust Medical Image Classification From Noisy Labeled Data With Global and Local Representation Guided Co-Training.
- Author
-
Xue, Cheng, Yu, Lequan, Chen, Pengfei, Dou, Qi, and Heng, Pheng-Ann
- Subjects
- *
MEDICAL coding , *DIAGNOSTIC imaging , *RANDOM noise theory , *IMAGE analysis , *LEARNING strategies - Abstract
Deep neural networks have achieved remarkable success in a wide variety of natural image and medical image computing tasks. However, these achievements indispensably rely on accurately annotated training data. If encountering some noisy-labeled images, the network training procedure would suffer from difficulties, leading to a sub-optimal classifier. This problem is even more severe in the medical image analysis field, as the annotation quality of medical images heavily relies on the expertise and experience of annotators. In this paper, we propose a novel collaborative training paradigm with global and local representation learning for robust medical image classification from noisy-labeled data to combat the lack of high quality annotated medical data. Specifically, we employ the self-ensemble model with a noisy label filter to efficiently select the clean and noisy samples. Then, the clean samples are trained by a collaborative training strategy to eliminate the disturbance from imperfect labeled samples. Notably, we further design a novel global and local representation learning scheme to implicitly regularize the networks to utilize noisy samples in a self-supervised manner. We evaluated our proposed robust learning strategy on four public medical image classification datasets with three types of label noise, i.e., random noise, computer-generated label noise, and inter-observer variability noise. Our method outperforms other learning from noisy label methods and we also conducted extensive experiments to analyze each component of our method. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
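A common way to realize the abstract's "noisy label filter" is small-loss selection: samples with the smallest training loss are treated as clean, the rest as noisy and routed to the self-supervised regularization. The sketch below assumes per-sample losses are already available (the paper derives them via a self-ensemble model), and the ratio is illustrative.

```python
import numpy as np

def select_clean(losses, clean_ratio):
    """Select likely-clean samples by the small-loss criterion.

    losses: (N,) per-sample losses. Returns (clean indices, noisy
    indices), with the lowest-loss fraction treated as clean.
    """
    n_clean = int(len(losses) * clean_ratio)
    order = np.argsort(losses)            # ascending loss
    return order[:n_clean], order[n_clean:]

losses = np.array([0.1, 2.3, 0.2, 1.9, 0.15, 0.12])
clean_idx, noisy_idx = select_clean(losses, clean_ratio=0.5)
```

The clean subset then drives the collaborative training, while the rejected samples still contribute through the global and local representation learning scheme.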
32. Deep Diffusion MRI Registration (DDMReg): A Deep Learning Method for Diffusion MRI Registration.
- Author
-
Zhang, Fan, Wells, William M., and O'Donnell, Lauren J.
- Subjects
- *
DEEP learning , *IMAGE registration , *DIFFUSION magnetic resonance imaging , *RECORDING & registration , *OPTICAL scanners , *DIFFUSION tensor imaging , *BRAIN anatomy , *WHITE matter (Nerve tissue) - Abstract
In this paper, we present a deep learning method, DDMReg, for accurate registration between diffusion MRI (dMRI) datasets. In dMRI registration, the goal is to spatially align brain anatomical structures while ensuring that local fiber orientations remain consistent with the underlying white matter fiber tract anatomy. DDMReg is a novel method that uses joint whole-brain and tract-specific information for dMRI registration. Based on the successful VoxelMorph framework for image registration, we propose a novel registration architecture that leverages not only whole brain information but also tract-specific fiber orientation information. DDMReg is an unsupervised method for deformable registration between pairs of dMRI datasets: it does not require nonlinearly pre-registered training data or the corresponding deformation fields as ground truth. We perform comparisons with four state-of-the-art registration methods on multiple independently acquired datasets from different populations (including teenagers, young and elderly adults) and different imaging protocols and scanners. We evaluate the registration performance by assessing the ability to align anatomically corresponding brain structures and ensure fiber spatial agreement between different subjects after registration. Experimental results show that DDMReg obtains significantly improved registration performance compared to the state-of-the-art methods. Importantly, we demonstrate successful generalization of DDMReg to dMRI data from different populations with varying ages and acquired using different acquisition protocols and different scanners. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
33. Recurrent Tissue-Aware Network for Deformable Registration of Infant Brain MR Images.
- Author
-
Wei, Dongming, Ahmad, Sahar, Guo, Yuyu, Chen, Liyun, Huang, Yunzhi, Ma, Lei, Wu, Zhengwang, Li, Gang, Wang, Li, Lin, Weili, Yap, Pew-Thian, Shen, Dinggang, and Wang, Qian
- Subjects
- *
MAGNETIC resonance imaging , *BRAIN imaging , *INFANTS , *RECORDING & registration , *IMAGE analysis - Abstract
Deformable registration is fundamental to longitudinal and population-based image analyses. However, it is challenging to precisely align longitudinal infant brain MR images of the same subject, as well as cross-sectional infant brain MR images of different subjects, due to fast brain development during infancy. In this paper, we propose a recurrently usable deep neural network for the registration of infant brain MR images. There are three main highlights of our proposed method. (i) We use brain tissue segmentation maps for registration, instead of intensity images, to tackle the issue of rapid contrast changes of brain tissues during the first year of life. (ii) A single registration network is trained in a one-shot manner, and then recurrently applied multiple times at inference, such that the complex deformation field can be recovered incrementally. (iii) We also introduce both an adaptive smoothing layer and a tissue-aware anti-folding constraint into the registration network to ensure the physiological plausibility of estimated deformations without degrading the registration accuracy. Experimental results, in comparison to the state-of-the-art registration methods, indicate that our proposed method achieves the highest registration accuracy while still preserving the smoothness of the deformation field. The implementation of our proposed registration network is available online https://github.com/Barnonewdm/ACTA-Reg-Net. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
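Highlight (ii) above — one network applied recurrently so a large deformation accumulates from small corrections — can be illustrated in 1D, with a crude correlation-based offset estimator standing in for the registration network:

```python
import numpy as np

def recurrent_register(moving, fixed, n_iters, step):
    """Recover a large 1D shift by recurrent small corrections.

    Toy analogue of applying one registration model multiple times: each
    pass estimates a bounded correction (here by testing shifts of at
    most `step` via correlation) and the total deformation accumulates.
    """
    total_shift = 0
    for _ in range(n_iters):
        warped = np.roll(moving, total_shift)
        # pick the small shift that best correlates with the fixed image
        scores = [np.dot(np.roll(warped, s), fixed) for s in (-step, 0, step)]
        total_shift += (-step, 0, step)[int(np.argmax(scores))]
    return total_shift

x = np.sin(np.linspace(0, 4 * np.pi, 128))
moving = np.roll(x, -6)   # misaligned by 6 samples
shift = recurrent_register(moving, x, n_iters=10, step=1)
```

No single pass can correct more than one sample, yet the composed result recovers the full 6-sample misalignment, mirroring how the recurrent network recovers a complex field incrementally.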
34. Single Model Deep Learning on Imbalanced Small Datasets for Skin Lesion Classification.
- Author
-
Yao, Peng, Shen, Shuwei, Xu, Mengjuan, Liu, Peng, Zhang, Fan, Xing, Jinyu, Shao, Pengfei, Kaffenberger, Benjamin, and Xu, Ronald X.
- Subjects
- *
DEEP learning , *SKIN disease diagnosis , *CONVOLUTIONAL neural networks , *DIAGNOSTIC imaging , *LEARNING strategies - Abstract
Deep convolutional neural network (DCNN) models have been widely explored for skin disease diagnosis and some of them have achieved diagnostic outcomes comparable or even superior to those of dermatologists. However, broad implementation of DCNN in skin disease detection is hindered by the small size and data imbalance of the publicly accessible skin lesion datasets. This paper proposes a novel single-model based strategy for classification of skin lesions on small and imbalanced datasets. First, various DCNNs are trained on different small and imbalanced datasets to verify that models with moderate complexity outperform larger models. Second, DropOut and DropBlock regularization are added to reduce overfitting, and a Modified RandAugment augmentation strategy is proposed to deal with the defects of sample underrepresentation in the small dataset. Finally, a novel Multi-Weighted New Loss (MWNL) function and an end-to-end cumulative learning strategy (CLS) are introduced to overcome the challenge of uneven sample size and classification difficulty and to reduce the impact of abnormal samples on training. By combining Modified RandAugment, MWNL and CLS, our single DCNN model method achieved classification accuracy comparable or superior to that of multiple ensemble models on different dermoscopic image datasets. Our study shows that this method is able to achieve a high classification performance at a low cost of computational resources and inference time, potentially suitable for implementation on mobile devices for automated screening of skin lesions and many other malignancies in low-resource settings. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
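A minimal class-weighted loss illustrates the direction MWNL takes for uneven sample sizes; the actual MWNL combines several additional weighting factors (classification difficulty, abnormal-sample suppression), so this sketch shows only the simplest ingredient under an assumed inverse-frequency weighting:

```python
import numpy as np

def weighted_nll(probs, labels, class_weights):
    """Class-weighted negative log-likelihood for imbalanced data.

    probs: (N, K) predicted probabilities; labels: (N,) integer classes;
    class_weights: (K,) per-class weights. Rare classes get larger
    weights so their errors dominate the average.
    """
    p = probs[np.arange(len(labels)), labels]
    w = class_weights[labels]
    return float(np.mean(-w * np.log(p)))

# Two classes, class 1 is rare: weight inversely to class frequency.
counts = np.array([90, 10])
weights = counts.sum() / (len(counts) * counts)
probs = np.array([[0.9, 0.1], [0.4, 0.6]])
labels = np.array([0, 1])
loss = weighted_nll(probs, labels, weights)
```

With these counts the rare class carries nine times the weight of the common one, so the mediocre prediction on the rare-class sample dominates the loss.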
35. PathAL: An Active Learning Framework for Histopathology Image Analysis.
- Author
-
Li, Wenyuan, Li, Jiayun, Wang, Zichen, Polson, Jennifer, Sisk, Anthony E., Sajed, Dipti P., Speier, William, and Arnold, Corey W.
- Subjects
- *
IMAGE analysis , *ACTIVE learning , *SUPERVISED learning , *DEEP learning , *HISTOPATHOLOGY , *IMAGE segmentation , *PROSTATE cancer - Abstract
Deep neural networks, in particular convolutional networks, have rapidly become a popular choice for analyzing histopathology images. However, training these models relies heavily on a large number of samples manually annotated by experts, which is cumbersome and expensive. In addition, it is difficult to obtain a perfect set of labels due to the variability between expert annotations. This paper presents a novel active learning (AL) framework for histopathology image analysis, named PathAL. To reduce the required number of expert annotations, PathAL selects two groups of unlabeled data in each training iteration: a group of “informative” samples that require additional expert annotation, and a group of “confident predictive” samples that are automatically added to the training set using the model’s pseudo-labels. To reduce the impact of the noisy-labeled samples in the training set, PathAL systematically identifies noisy samples and excludes them to improve the generalization of the model. Our model advances the existing AL method for medical image analysis in two ways. First, we present a selection strategy to improve classification performance with fewer manual annotations. Unlike traditional methods focusing only on finding the most uncertain samples with low prediction confidence, we discover a large number of high confidence samples from the unlabeled set and automatically add them for training with assigned pseudo-labels. Second, we design a method to distinguish between noisy samples and hard samples using a heuristic approach. We exclude the noisy samples while preserving the hard samples to improve model performance. Extensive experiments demonstrate that our proposed PathAL framework achieves promising results on a prostate cancer Gleason grading task, obtaining similar performance with 40% fewer annotations compared to the fully supervised learning scenario. 
An ablation study is provided to analyze the effectiveness of each component in PathAL, and a pathologist reader study is conducted to validate our proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
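The per-iteration selection PathAL performs can be sketched as: send the highest-entropy predictions to the expert, and pseudo-label the most confident ones. The threshold and counts below are illustrative, not the paper's settings:

```python
import numpy as np

def split_unlabeled(probs, n_informative, conf_threshold):
    """PathAL-style selection over unlabeled predictions (sketch).

    probs: (N, K) predicted class probabilities. The highest-entropy
    samples are flagged for expert annotation; samples whose maximum
    probability exceeds conf_threshold are pseudo-labeled automatically.
    """
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    informative = np.argsort(entropy)[::-1][:n_informative]
    confident = np.where(probs.max(axis=1) >= conf_threshold)[0]
    pseudo_labels = probs[confident].argmax(axis=1)
    return informative, confident, pseudo_labels

probs = np.array([
    [0.98, 0.02],   # confident -> pseudo-label 0
    [0.55, 0.45],   # uncertain -> ask the expert
    [0.05, 0.95],   # confident -> pseudo-label 1
])
info, conf, pseudo = split_unlabeled(probs, n_informative=1, conf_threshold=0.9)
```

The noisy-versus-hard distinction the abstract also describes would then be applied on top of this split before the pseudo-labeled samples enter training.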
36. Weakly Supervised Liver Tumor Segmentation Using Couinaud Segment Annotation.
- Author
-
Lyu, Fei, Ma, Andy J., Yip, Terry Cheuk-Fung, Wong, Grace Lai-Hung, and Yuen, Pong C.
- Subjects
- *
LIVER tumors , *IMAGE segmentation , *ANNOTATIONS , *ONCOLOGISTS , *DEEP learning - Abstract
Automatic liver tumor segmentation is of great importance for assisting doctors in liver cancer diagnosis and treatment planning. Recently, deep learning approaches trained with pixel-level annotations have contributed many breakthroughs in image segmentation. However, acquiring such accurate dense annotations is time-consuming and labor-intensive, which limits the performance of deep neural networks for medical image segmentation. We note that Couinaud segment is widely used by radiologists when recording liver cancer-related findings in the reports, since it is well-suited for describing the localization of tumors. In this paper, we propose a novel approach to train convolutional networks for liver tumor segmentation using Couinaud segment annotations. Couinaud segment annotations are image-level labels with values ranging from 1 to 8, indicating a specific region of the liver. Our proposed model, namely CouinaudNet, can estimate pseudo tumor masks from the Couinaud segment annotations as pixel-wise supervision for training a fully supervised tumor segmentation model, and it is composed of two components: 1) an inpainting network with Couinaud segment masks which can effectively remove tumors for pathological images by filling the tumor regions with plausible healthy-looking intensities; 2) a difference spotting network for segmenting the tumors, which is trained with healthy-pathological pairs generated by an effective tumor synthesis strategy. The proposed method is extensively evaluated on two liver tumor segmentation datasets. The experimental results demonstrate that our method can achieve competitive performance compared to the fully supervised counterpart and the state-of-the-art methods while requiring significantly less annotation effort. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
37. Deep-Learning-Based Automated Neuron Reconstruction From 3D Microscopy Images Using Synthetic Training Images.
- Author
-
Chen, Weixun, Liu, Min, Du, Hao, Radojevic, Miroslav, Wang, Yaonan, and Meijering, Erik
- Subjects
- *
THREE-dimensional imaging , *CONVOLUTIONAL neural networks , *NEURONS , *IMAGE reconstruction - Abstract
Digital reconstruction of neuronal structures from 3D microscopy images is critical for the quantitative investigation of brain circuits and functions. It is a challenging task that would greatly benefit from automatic neuron reconstruction methods. In this paper, we propose a novel method called SPE-DNR that combines spherical-patches extraction (SPE) and deep-learning for neuron reconstruction (DNR). Based on 2D Convolutional Neural Networks (CNNs) and the intensity distribution features extracted by SPE, it determines the tracing directions and classifies voxels into foreground or background. This way, starting from a set of seed points, it automatically traces the neurite centerlines and determines when to stop tracing. To avoid errors caused by imperfect manual reconstructions, we develop an image synthesizing scheme to generate synthetic training images with exact reconstructions. This scheme simulates 3D microscopy imaging conditions as well as structural defects, such as gaps and abrupt radii changes, to improve the visual realism of the synthetic images. To demonstrate the applicability and generalizability of SPE-DNR, we test it on 67 real 3D neuron microscopy images from three datasets. The experimental results show that the proposed SPE-DNR method is robust and competitive compared with other state-of-the-art neuron reconstruction methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
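The spherical-patches idea above can be sketched as sampling intensities on concentric spherical shells around a seed voxel, with nearest-neighbor lookup into the volume. This is a simplified, hypothetical version: the shell radii, angular sampling density, and the function name `spherical_patch` are assumptions, not the paper's exact parameterization.

```python
import math

def spherical_patch(volume, center, radii, n_lat=4, n_lon=8):
    """Sample a 3D volume on concentric spherical shells around `center`
    (nearest-neighbor lookup); a simplified stand-in for SPE."""
    cz, cy, cx = center
    patch = []
    for r in radii:
        shell = []
        for i in range(n_lat):
            theta = math.pi * (i + 0.5) / n_lat          # polar angle
            for j in range(n_lon):
                phi = 2 * math.pi * j / n_lon             # azimuth
                z = int(round(cz + r * math.cos(theta)))
                y = int(round(cy + r * math.sin(theta) * math.sin(phi)))
                x = int(round(cx + r * math.sin(theta) * math.cos(phi)))
                if (0 <= z < len(volume) and 0 <= y < len(volume[0])
                        and 0 <= x < len(volume[0][0])):
                    shell.append(volume[z][y][x])
                else:
                    shell.append(0.0)                     # outside the image
        patch.append(shell)
    return patch

# Uniform 9x9x9 volume: every shell sample equals the constant intensity.
vol = [[[1.0] * 9 for _ in range(9)] for _ in range(9)]
patch = spherical_patch(vol, center=(4, 4, 4), radii=[1, 2, 3])
```

Each shell yields `n_lat * n_lon` samples, so the patch forms a fixed-size 2D feature array that a 2D CNN can consume regardless of the local neurite geometry.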
38. A 3D Tubular Flux Model for Centerline Extraction in Neuron Volumetric Images.
- Author
-
Wang, Xuan, Liu, Min, Wang, Yaonan, Fan, Jiawang, and Meijering, Erik
- Subjects
- *
NEURONS , *IMAGE reconstruction - Abstract
Digital morphology reconstruction from neuron volumetric images is essential for computational neuroscience. The centerline of the axonal and dendritic tree provides an effective shape representation and serves as a basis for further neuron reconstruction. However, it is still a challenge to directly extract the accurate centerline from the complex neuron structure with poor image quality. In this paper, we propose a neuron centerline extraction method based on a 3D tubular flux model via a two-stage CNN framework. In the first stage, a 3D CNN is used to learn the latent neuron structure features, namely flux features, from neuron images. In the second stage, a light-weight U-Net takes the learned flux features as input to extract the centerline with a spatial weighted average strategy to constrain the multi-voxel width response. Specifically, the labels of flux features in the first stage are generated by the 3D tubular model which calculates the geometric representations of the flux between each voxel in the tubular region and the nearest point on the centerline ground truth. Compared with features self-learned by networks, flux features, as a kind of prior knowledge, explicitly take advantage of the contextual distance and direction distribution information around the centerline, which is beneficial for precise centerline extraction. Experiments on two challenging datasets demonstrate that the proposed method outperforms other state-of-the-art methods by up to 18% and 35.1% in F1-measure and average distance scores, respectively, and the extracted centerline helps improve neuron reconstruction performance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
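The flux labels described above (geometric relations between a tube voxel and its nearest centerline point) can be sketched as a distance plus a unit direction vector toward that point. The function name `flux_label` and the exact label encoding are assumptions for illustration only.

```python
import math

def flux_label(voxel, centerline):
    """Hypothetical simplification of the 3D tubular flux label: the
    distance from a voxel to its nearest ground-truth centerline point,
    and the unit direction pointing toward that point."""
    best = min(centerline, key=lambda p: math.dist(voxel, p))
    d = math.dist(voxel, best)
    if d == 0:
        return 0.0, (0.0, 0.0, 0.0)      # voxel lies on the centerline
    direction = tuple((b - v) / d for v, b in zip(voxel, best))
    return d, direction

# Straight centerline along the z-axis; a voxel offset by 3 in y.
centerline = [(float(z), 0.0, 0.0) for z in range(10)]
dist, direction = flux_label((5.0, 3.0, 0.0), centerline)
```

The direction component is what encodes the "flux" prior: inside the tube, all label vectors point inward toward the centerline, which the first-stage CNN learns to regress.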
39. Deep Learning-Based Detection and Correction of Cardiac MR Motion Artefacts During Reconstruction for High-Quality Segmentation.
- Author
-
Oksuz, Ilkay, Clough, James R., Ruijsink, Bram, Anton, Esther Puyol, Bustin, Aurelien, Cruz, Gastao, Prieto, Claudia, King, Andrew P., and Schnabel, Julia A.
- Subjects
- *
DEEP learning , *COMPUTER-assisted image analysis (Medicine) , *K-spaces , *IMAGE analysis , *IMAGE reconstruction algorithms , *IMAGE reconstruction , *IMAGE segmentation - Abstract
Segmenting anatomical structures in medical images has been successfully addressed with deep learning methods for a range of applications. However, this success is heavily dependent on the quality of the image that is being segmented. A commonly neglected point in the medical image analysis community is the vast amount of clinical images that have severe image artefacts due to organ motion, movement of the patient and/or image acquisition related issues. In this paper, we discuss the implications of image motion artefacts on cardiac MR segmentation and compare a variety of approaches for jointly correcting for artefacts and segmenting the cardiac cavity. The method is based on our recently developed joint artefact detection and reconstruction method, which reconstructs high quality MR images from k-space using a joint loss function and essentially converts the artefact correction task to an under-sampled image reconstruction task by enforcing a data consistency term. In this paper, we propose to use a segmentation network coupled with this in an end-to-end framework. Our training optimises three different tasks: 1) image artefact detection, 2) artefact correction and 3) image segmentation. We train the reconstruction network to automatically correct for motion-related artefacts using synthetically corrupted cardiac MR k-space data and uncorrected reconstructed images. Using a test set of 500 2D+time cine MR acquisitions from the UK Biobank data set, we achieve demonstrably good image quality and high segmentation accuracy in the presence of synthetic motion artefacts, outperforming various image correction architectures. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
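The data consistency term mentioned above has a standard, compact form in undersampled MR reconstruction: keep the k-space samples that were actually acquired and fall back to the network's prediction elsewhere. A minimal 1D sketch (the function name and toy values are illustrative):

```python
def data_consistency(k_pred, k_acq, sampled):
    """Enforce a k-space data-consistency term (minimal sketch): acquired
    samples override the network prediction; unsampled positions keep
    the predicted values."""
    return [ka if s else kp for kp, ka, s in zip(k_pred, k_acq, sampled)]

# Toy 1D k-space line: positions 0 and 2 were actually acquired.
k_pred = [1 + 1j, 2 + 0j, 3 - 1j, 4 + 2j]
k_acq  = [0 + 0j, 0 + 0j, 5 + 5j, 0 + 0j]
sampled = [True, False, True, False]
k_out = data_consistency(k_pred, k_acq, sampled)
```

In practice this operation is applied as a differentiable layer on 2D k-space after a Fourier transform of the network output, which is what lets artefact correction be cast as an under-sampled reconstruction problem.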
40. Impedance-Optical Dual-Modal Cell Culture Imaging With Learning-Based Information Fusion.
- Author
-
Liu, Zhe, Bagnaninchi, Pierre, and Yang, Yunjie
- Subjects
- *
CELL imaging , *ELECTRICAL impedance tomography , *IMAGE processing , *CELL culture , *TISSUE engineering , *DEEP learning - Abstract
While Electrical Impedance Tomography (EIT) has found many biomedical applications, better image quality is needed to provide quantitative analysis for tissue engineering and regenerative medicine. This paper reports an impedance-optical dual-modal imaging framework that primarily targets high-quality 3D cell culture imaging and can be extended to other tissue engineering applications. The framework comprises three components, i.e., an impedance-optical dual-modal sensor, the guidance image processing algorithm, and a deep learning model named multi-scale feature cross fusion network (MSFCF-Net) for information fusion. The MSFCF-Net has two inputs, i.e., the EIT measurement and a binary mask image generated by the guidance image processing algorithm, whose input is an RGB microscopic image. The network then effectively fuses the information from the two different imaging modalities and generates the final conductivity image. We assess the performance of the proposed dual-modal framework by numerical simulation and MCF-7 cell imaging experiments. The results show that the proposed method could improve the image quality notably, indicating that impedance-optical joint imaging has the potential to reveal the structural and functional information of tissue-level targets simultaneously. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
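The binary guidance mask described above is derived from an RGB microscopic image; a plausible minimal stand-in is luminance conversion followed by thresholding. The function name `guidance_mask`, the luminance weights, and the threshold are assumptions; the paper's actual guidance algorithm is more involved.

```python
def guidance_mask(rgb_image, threshold=0.5):
    """Hypothetical stand-in for the guidance image processing step:
    convert an RGB microscopic image to grayscale (ITU-R BT.601 luminance
    weights) and threshold it into the binary mask fed to MSFCF-Net."""
    mask = []
    for row in rgb_image:
        mask.append([
            1 if 0.299 * r + 0.587 * g + 0.114 * b > threshold else 0
            for r, g, b in row
        ])
    return mask

# One bright cell pixel on a dark background.
image = [[(0.1, 0.1, 0.1), (0.9, 0.9, 0.9)],
         [(0.1, 0.1, 0.1), (0.1, 0.1, 0.1)]]
mask = guidance_mask(image)
```

The resulting mask gives the fusion network a spatial prior about where cells are, complementing the low-resolution EIT measurement.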
41. Structure-Guided Segmentation for 3D Neuron Reconstruction.
- Author
-
Yang, Bo, Liu, Min, Wang, Yaonan, Zhang, Kang, and Meijering, Erik
- Subjects
- *
THREE-dimensional imaging , *IMAGE segmentation , *NEURONS , *SIGNAL-to-noise ratio , *IMAGE reconstruction - Abstract
Digital reconstruction of neuronal morphologies in 3D microscopy images is critical in the field of neuroscience. However, most existing automatic tracing algorithms cannot obtain accurate neuron reconstruction when processing 3D neuron images contaminated by strong background noises or containing weak filament signals. In this paper, we present a 3D neuron segmentation network named Structure-Guided Segmentation Network (SGSNet) to enhance weak neuronal structures and remove background noises. The network contains a shared encoding path but utilizes two decoding paths called Main Segmentation Branch (MSB) and Structure-Detection Branch (SDB), respectively. MSB is trained on binary labels to acquire the 3D neuron image segmentation maps. However, the segmentation results in challenging datasets often contain structural errors, such as discontinued segments of the weak-signal neuronal structures and missing filaments due to low signal-to-noise ratio (SNR). Therefore, SDB is presented to detect the neuronal structures by regressing neuron distance transform maps. Furthermore, a Structure Attention Module (SAM) is designed to integrate the multi-scale feature maps of the two decoding paths, and provide contextual guidance of structural features from SDB to MSB to improve the final segmentation performance. In the experiments, we evaluate our model on two challenging 3D neuron image datasets, the BigNeuron dataset and the Extended Whole Mouse Brain Sub-image (EWMBS) dataset. When using different tracing methods on the segmented images produced by our method rather than by other state-of-the-art segmentation methods, the distance scores improve by 42.48% and 35.83% on the BigNeuron dataset and by 37.75% and 23.13% on the EWMBS dataset. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
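The distance transform maps that SDB regresses assign each voxel its distance to the nearest neuronal structure. A minimal 2D sketch (4-connected BFS instead of a Euclidean transform, and the function name `distance_map` is an assumption):

```python
from collections import deque

def distance_map(rows, cols, foreground):
    """BFS distance (4-connected) of every pixel to the nearest foreground
    pixel; a 2D stand-in for the neuron distance transform maps that the
    Structure-Detection Branch is trained to regress."""
    dist = [[None] * cols for _ in range(rows)]
    queue = deque()
    for r, c in foreground:
        dist[r][c] = 0                    # distance is zero on the structure
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

# A single neurite pixel in the corner of a 3x3 grid.
dmap = distance_map(3, 3, [(0, 0)])
```

Regressing such a smooth field, rather than a hard binary mask, gives the network a training signal that stays informative even where filament intensity is weak.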
42. U-LanD: Uncertainty-Driven Video Landmark Detection.
- Author
-
Jafari, Mohammad H., Luong, Christina, Tsang, Michael, Gu, Ang Nan, Van Woudenberg, Nathan, Rohling, Robert, Tsang, Teresa, and Abolmaesumi, Purang
- Subjects
- *
ULTRASONIC imaging , *VIDEOS , *CARDIAC imaging , *VIDEO surveillance - Abstract
This paper presents U-LanD, a framework for automatic detection of landmarks on key frames of video by leveraging the uncertainty of landmark prediction. We tackle a specifically challenging problem, where training labels are noisy and highly sparse. U-LanD builds upon a pivotal observation: a deep Bayesian landmark detector trained solely on key video frames has significantly lower predictive uncertainty on those frames than on other frames in videos. We use this observation as an unsupervised signal to automatically recognize key frames on which we detect landmarks. As a test-bed for our framework, we use ultrasound imaging videos of the heart, where sparse and noisy clinical labels are only available for a single frame in each video. Using data from 4,493 patients, we demonstrate that U-LanD substantially outperforms the state-of-the-art non-Bayesian counterpart by an absolute margin of 42% in R² score, with almost no overhead imposed on the model size. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
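The key-frame selection signal above can be sketched concretely: run several stochastic forward passes (e.g. Monte Carlo dropout) per frame, measure the variance of the predictions, and keep frames whose uncertainty is low. The function names and the threshold are illustrative assumptions.

```python
from statistics import pvariance

def predictive_uncertainty(samples):
    """Per-frame predictive variance over stochastic forward passes
    (a toy stand-in for the Bayesian landmark detector's uncertainty)."""
    return [pvariance(frame) for frame in samples]

def select_key_frames(samples, threshold):
    """Frames whose uncertainty falls below `threshold` are treated as key
    frames, mirroring the observation that a detector trained only on key
    frames is markedly more confident on them."""
    unc = predictive_uncertainty(samples)
    return [i for i, u in enumerate(unc) if u < threshold]

# 3 frames x 4 stochastic passes: only frame 1 is confident (low variance).
samples = [[0.1, 0.9, 0.2, 0.8],
           [0.50, 0.51, 0.49, 0.50],
           [0.0, 1.0, 0.3, 0.7]]
keys = select_key_frames(samples, threshold=0.01)
```

Because the signal comes from the model's own variance, no frame-level labels are needed, which matches the single-labeled-frame-per-video setting described in the abstract.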
43. WDCCNet: Weighted Double-Classifier Constraint Neural Network for Mammographic Image Classification.
- Author
-
Wang, Yan, Wang, Zizhou, Feng, Yangqin, and Zhang, Lei
- Subjects
- *
CONVOLUTIONAL neural networks , *DEEP learning , *FEATURE extraction , *BREAST cancer - Abstract
The early detection and timely treatment of breast cancer can save lives. Mammography is one of the most efficient approaches to screening early breast cancer. An automatic mammographic image classification method could improve the work efficiency of radiologists. Current deep learning-based methods typically use the traditional softmax loss to optimize the feature extraction part, which aims to learn the features of mammographic images. However, previous studies have shown that the feature extraction part cannot learn discriminative features from complex data using the standard softmax loss. In this paper, we design a new architecture and propose respective loss functions. Specifically, we develop a double-classifier network architecture that constrains the extracted features’ distribution by changing the classifiers’ decision boundaries. Then, we propose the double-classifier constraint loss function to constrain the decision boundaries so that the feature extraction part can learn discriminative features. Furthermore, by taking advantage of the architecture of two classifiers, the neural network can detect difficult-to-classify samples. We propose a weighted double-classifier constraint method to make the feature extraction part pay more attention to learning the features of difficult-to-classify samples. Our proposed method can be easily applied to an existing convolutional neural network to improve mammographic image classification performance. We conducted extensive experiments to evaluate our methods on three public benchmark mammographic image datasets. The results showed that our methods outperformed many other similar methods and state-of-the-art methods on the three public medical benchmarks. Our code and weights can be found on GitHub. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
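One way to picture the weighted double-classifier idea above: samples on which the two classifiers disagree are "difficult", and can be given a larger loss weight. This is a hypothetical sketch in that spirit, not the paper's actual loss; the function `sample_weight` and the L1 disagreement measure are assumptions.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sample_weight(logits_a, logits_b):
    """Hypothetical hard-sample weighting: samples on which the two
    classifiers disagree (L1 distance between their softmax outputs)
    receive a larger loss weight; easy samples keep weight ~1."""
    pa, pb = softmax(logits_a), softmax(logits_b)
    disagreement = sum(abs(x - y) for x, y in zip(pa, pb))
    return 1.0 + disagreement

w_easy = sample_weight([5.0, 0.0], [5.0, 0.0])   # identical predictions
w_hard = sample_weight([5.0, 0.0], [0.0, 5.0])   # opposite predictions
```

Multiplying the per-sample loss by such a weight focuses the feature extractor on the samples near the two decision boundaries.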
44. Propagating Uncertainty Across Cascaded Medical Imaging Tasks for Improved Deep Learning Inference.
- Author
-
Mehta, Raghav, Christinck, Thomas, Nair, Tanya, Bussy, Aurelie, Premasiri, Swapna, Costantino, Manuela, Chakravarthy, M. Mallar, Arnold, Douglas L., Gal, Yarin, and Arbel, Tal
- Subjects
- *
DEEP learning , *IMAGE registration , *COMPUTER-assisted image analysis (Medicine) , *DIAGNOSTIC imaging , *BRAIN tumors , *DETERMINISTIC processes - Abstract
Although deep networks have been shown to perform very well on a variety of medical imaging tasks, inference in the presence of pathology presents several challenges to common models. These challenges impede the integration of deep learning models into real clinical workflows, where the customary process of cascading deterministic outputs from a sequence of image-based inference steps (e.g. registration, segmentation) generally leads to an accumulation of errors that impacts the accuracy of downstream inference tasks. In this paper, we propose that by embedding uncertainty estimates across cascaded inference tasks, performance on the downstream inference tasks should be improved. We demonstrate the effectiveness of the proposed approach in three different clinical contexts: (i) We demonstrate that by propagating T2 weighted lesion segmentation results and their associated uncertainties, subsequent T2 lesion detection performance is improved when evaluated on a proprietary large-scale, multi-site, clinical trial dataset acquired from patients with Multiple Sclerosis. (ii) We show an improvement in brain tumour segmentation performance when the uncertainty map associated with a synthesised missing MR volume is provided as an additional input to a follow-up brain tumour segmentation network, when evaluated on the publicly available BraTS-2018 dataset. (iii) We show that by propagating uncertainties from a voxel-level hippocampus segmentation task, the subsequent regression of the Alzheimer’s disease clinical score is improved. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
45. Semantic-Oriented Labeled-to-Unlabeled Distribution Translation for Image Segmentation.
- Author
-
Guo, Xiaoqing, Liu, Jie, and Yuan, Yixuan
- Subjects
- *
IMAGE segmentation , *SUPERVISED learning , *DEEP learning , *DATA distribution , *SOURCE code , *DIAGNOSTIC imaging - Abstract
Automatic medical image segmentation plays a crucial role in many medical applications, such as disease diagnosis and treatment planning. Existing deep learning based models usually regard the segmentation task as pixel-wise classification and neglect the semantic correlations of pixels across different images, leading to vague feature distributions. Moreover, pixel-wise annotated data is rare in the medical domain, and the scarce annotated data usually exhibits a biased distribution against the desired one, hindering performance improvement under the supervised learning setting. In this paper, we propose a novel Labeled-to-unlabeled Distribution Translation (L2uDT) framework with Semantic-oriented Contrastive Learning (SoCL), mainly for addressing the aforementioned issues in medical image segmentation. In SoCL, a semantic grouping module is designed to cluster pixels into a set of semantically coherent groups, and a semantic-oriented contrastive loss is advanced to constrain group-wise prototypes, so as to explicitly learn a feature space with intra-class compactness and inter-class separability. We then establish a L2uDT strategy to approximate the desired data distribution for unbiased optimization, where we translate the labeled data distribution with the guidance of extensive unlabeled data. In particular, a bias estimator is devised to measure the distribution bias, then a gradual-paced shift is derived to progressively translate the labeled data distribution to the unlabeled one. Both labeled and translated data are leveraged to optimize the segmentation model simultaneously. We illustrate the effectiveness of the proposed method on two benchmark datasets, EndoScene and PROSTATEx, and our method achieves state-of-the-art performance, which clearly demonstrates its effectiveness for medical image segmentation. The source code is available at https://github.com/CityU-AIM-Group/L2uDT. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
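The gradual-paced shift above can be illustrated in one dimension: interpolate the labeled distribution's statistics toward the unlabeled ones, with the pace growing over training. The function name `gradual_shift` and the linear pace schedule are illustrative assumptions; the paper operates on feature-space distributions, not scalar means.

```python
def gradual_shift(labeled_mean, unlabeled_mean, step, total_steps):
    """Gradual-paced translation of the labeled distribution toward the
    unlabeled one (a 1D sketch with a linear pace schedule)."""
    pace = step / total_steps                      # grows from 0 to 1
    return labeled_mean + pace * (unlabeled_mean - labeled_mean)

# Over 4 steps the labeled mean (0.0) migrates to the unlabeled mean (1.0).
shifts = [gradual_shift(0.0, 1.0, s, 4) for s in range(5)]
```

Training on progressively translated data avoids the abrupt domain jump that would occur if the model were optimized directly on the unlabeled distribution.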
46. SMU-Net: Saliency-Guided Morphology-Aware U-Net for Breast Lesion Segmentation in Ultrasound Image.
- Author
-
Ning, Zhenyuan, Zhong, Shengzhou, Feng, Qianjin, Chen, Wufan, and Zhang, Yu
- Subjects
- *
ULTRASONIC imaging , *BREAST , *CONVOLUTIONAL neural networks , *DEEP learning , *IMAGE segmentation , *BREAST ultrasound , *OBJECT recognition (Computer vision) , *LEARNING ability - Abstract
Deep learning methods, especially convolutional neural networks, have been successfully applied to lesion segmentation in breast ultrasound (BUS) images. However, pattern complexity and intensity similarity between the surrounding tissues (i.e., background) and lesion regions (i.e., foreground) bring challenges for lesion segmentation. Although such rich texture information is contained in the background, very few methods have tried to explore and exploit background-salient representations for assisting foreground segmentation. Additionally, other characteristics of BUS images, i.e., 1) low-contrast appearance and blurry boundaries, and 2) significant shape and position variation of lesions, also increase the difficulty of accurate lesion segmentation. In this paper, we present a saliency-guided morphology-aware U-Net (SMU-Net) for lesion segmentation in BUS images. The SMU-Net is composed of a main network with an additional middle stream and an auxiliary network. Specifically, we first propose generation of saliency maps, which incorporate both low-level and high-level image structures, for foreground and background. These saliency maps are then employed to guide the main network and auxiliary network in respectively learning foreground-salient and background-salient representations. Furthermore, we devise an additional middle stream which consists of background-assisted fusion, shape-aware, edge-aware and position-aware units. This stream receives the coarse-to-fine representations from the main network and auxiliary network, efficiently fusing the foreground-salient and background-salient features and enhancing the network's ability to learn morphological information. Extensive experiments on five datasets demonstrate higher performance and superior robustness to dataset scale compared with several state-of-the-art deep learning approaches to breast lesion segmentation in ultrasound images. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
47. Arterial Spin Labeling Images Synthesis From sMRI Using Unbalanced Deep Discriminant Learning.
- Author
-
Huang, Wei, Luo, Mingyuan, Liu, Xi, Zhang, Peng, Ding, Huijun, Xue, Wufeng, and Ni, Dong
- Subjects
- *
SPIN labels , *DEEP learning , *RETINAL imaging , *MAGNETIC resonance imaging , *DIAGNOSTIC imaging , *ULTRASONIC imaging , *DRUG labeling , *LABELS - Abstract
Adequate medical images are often indispensable in contemporary deep learning-based medical imaging studies, although the acquisition of certain image modalities may be limited due to several issues including high costs and patient-related issues. However, thanks to recent advances in deep learning techniques, this problem can be substantially alleviated by medical image synthesis, by which various modalities including T1/T2/DTI MRI images, PET images, cardiac ultrasound images, retinal images, and so on, have already been synthesized. Unfortunately, the arterial spin labeling (ASL) image, which is an important fMRI indicator in the diagnosis of dementia diseases nowadays, has not yet been comprehensively investigated for synthesis. In this paper, ASL images have been successfully synthesized from structural magnetic resonance images for the first time. Technically, a novel unbalanced deep discriminant learning-based model equipped with new ResNet sub-structures is proposed to realize the synthesis of ASL images from structural magnetic resonance images. Extensive experiments have been conducted. Comprehensive statistical analyses reveal that: 1) the newly introduced model is capable of synthesizing ASL images that are similar to real ones acquired by actual scanning; 2) synthesized ASL images obtained by the new model have demonstrated outstanding performance when undergoing rigorous tests of region-based and voxel-based corrections of partial volume effects, which are essential in ASL image processing; and 3) it is also promising that the diagnosis performance for dementia diseases can be significantly improved with the help of synthesized ASL images obtained by the new model, based on a multi-modal MRI dataset containing 355 patients with dementia in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
48. Deep Learning for Fast and Spatially Constrained Tissue Quantification From Highly Accelerated Data in Magnetic Resonance Fingerprinting.
- Author
-
Fang, Zhenghan, Chen, Yong, Liu, Mingxia, Xiang, Lei, Zhang, Qian, Wang, Qian, Lin, Weili, and Shen, Dinggang
- Subjects
- *
DEEP learning , *MAGNETIC resonance , *HUMAN body , *RESOURCE recovery facilities , *FEATURE extraction - Abstract
Magnetic resonance fingerprinting (MRF) is a quantitative imaging technique that can simultaneously measure multiple important tissue properties of the human body. Although MRF has demonstrated improved scan efficiency as compared to conventional techniques, further acceleration is still desired for translation into routine clinical practice. The purpose of this paper is to accelerate MRF acquisition by developing a new tissue quantification method for MRF that allows accurate quantification with fewer sampling data. Most of the existing approaches use the MRF signal evolution at each individual pixel to estimate tissue properties, without considering the spatial association among neighboring pixels. In this paper, we propose a spatially constrained quantification method that uses the signals at multiple neighboring pixels to better estimate tissue properties at the central pixel. Specifically, we design a unique two-step deep learning model that learns the mapping from the observed signals to the desired properties for tissue quantification, i.e., 1) a feature extraction module that reduces the dimension of signals by extracting a low-dimensional feature vector from the high-dimensional signal evolution, and 2) a spatially constrained quantification module that exploits the spatial information from the extracted feature maps to generate the final tissue property map. A corresponding two-step training strategy is developed for network training. The proposed method is tested on highly undersampled MRF data acquired from human brains. Experimental results demonstrate that our method can achieve accurate quantification for T1 and T2 relaxation times by using only 1/4 time points of the original sequence (i.e., four times of acceleration for MRF acquisition). [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
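For context, the classical per-pixel baseline that the deep model above replaces is dictionary matching: each measured signal evolution is compared against a dictionary of simulated fingerprints, and the (T1, T2) of the best match is assigned. A minimal sketch with a toy two-entry dictionary (the fingerprints and values are fabricated for illustration):

```python
import math

def match_tissue(signal, dictionary):
    """Classic MRF dictionary matching: pick the (T1, T2) entry whose
    simulated fingerprint has the highest normalized inner product
    (cosine similarity) with the measured signal."""
    def norm(v):
        return math.sqrt(sum(x * x for x in v))
    best, best_score = None, -1.0
    for (t1, t2), fp in dictionary.items():
        score = sum(a * b for a, b in zip(signal, fp)) / (norm(signal) * norm(fp))
        if score > best_score:
            best, best_score = (t1, t2), score
    return best

# Tiny toy dictionary of two fingerprints (values in ms, fabricated).
dictionary = {
    (800, 60): [1.0, 0.5, 0.25],
    (1200, 90): [1.0, 0.8, 0.64],
}
measured = [0.9, 0.73, 0.58]   # closer in shape to the (1200, 90) entry
t1_t2 = match_tissue(measured, dictionary)
```

Because this matching uses only one pixel's signal, it degrades quickly as time points are dropped; exploiting neighboring pixels, as the proposed method does, compensates for the shorter signal.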
49. Virtual Adversarial Training-Based Deep Feature Aggregation Network From Dynamic Effective Connectivity for MCI Identification.
- Author
-
Li, Yang, Liu, Jingyu, Jiang, Yiqiao, Liu, Yu, and Lei, Baiying
- Subjects
- *
CONVOLUTIONAL neural networks , *LARGE-scale brain networks , *MILD cognitive impairment , *GRAPH connectivity , *KALMAN filtering - Abstract
Dynamic functional connectivity (dFC) network inferred from resting-state fMRI reveals macroscopic dynamic neural activity patterns for brain disease identification. However, dFC methods ignore the causal influence between the brain regions. Furthermore, due to the complex non-Euclidean structure of brain networks, advanced deep neural networks are difficult to apply for learning high-dimensional representations from brain networks. In this paper, a group constrained Kalman filter (gKF) algorithm is proposed to construct dynamic effective connectivity (dEC), where the gKF provides a more comprehensive understanding of the directional interaction within the dynamic brain networks than the dFC methods. Then, a novel virtual adversarial training convolutional neural network (VAT-CNN) is employed to extract the local features of dEC. The VAT strategy improves the robustness of the model to adversarial perturbations, and therefore avoids the overfitting problem effectively. Finally, we propose the high-order connectivity weight-guided graph attention networks (cwGAT) to aggregate features of dEC. By injecting the weight information of high-order connectivity into the attention mechanism, the cwGAT provides more effective high-level feature representations than the conventional GAT. The high-level features generated from the cwGAT are applied for binary classification and multiclass classification tasks of mild cognitive impairment (MCI). Experimental results indicate that the proposed framework achieves classification accuracies of 90.9%, 89.8%, and 82.7% for normal control (NC) vs. early MCI (EMCI), EMCI vs. late MCI (LMCI), and NC vs. EMCI vs. LMCI classification respectively, significantly outperforming the state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
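The filtering idea behind gKF can be illustrated with the simplest possible case: a scalar Kalman filter tracking one time-varying connectivity coefficient under a random-walk state model. This is a generic textbook sketch; the group constraint and the multivariate structure of the paper's gKF are omitted, and the noise parameters are arbitrary.

```python
def kalman_1d(observations, q=0.01, r=0.1):
    """Minimal scalar Kalman filter tracking a time-varying coefficient
    (a 1D sketch of the filtering machinery; q = process noise variance,
    r = observation noise variance)."""
    x, p = 0.0, 1.0            # state estimate and its variance
    estimates = []
    for z in observations:
        p += q                  # predict: random-walk state model
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update with observation z
        p *= (1 - k)
        estimates.append(x)
    return estimates

# Noisy observations of a coefficient whose true value is about 0.5.
obs = [0.4, 0.6, 0.45, 0.55, 0.5, 0.52, 0.48]
est = kalman_1d(obs)
```

Applied per connection over an fMRI time series, such recursive estimates yield a time-resolved (dynamic) connectivity trajectory rather than a single static coefficient.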
50. Global-Local Transformer for Brain Age Estimation.
- Author
-
He, Sheng, Grant, P. Ellen, and Ou, Yangming
- Subjects
- *
MAGNETIC resonance imaging , *AGE , *DEEP learning , *BIOLOGICAL neural networks , *SOURCE code - Abstract
Deep learning can provide rapid brain age estimation based on brain magnetic resonance imaging (MRI). However, most studies use one neural network to extract the global information from the whole input image, ignoring the local fine-grained details. In this paper, we propose a global-local transformer, which consists of a global-pathway to extract the global-context information from the whole input image and a local-pathway to extract the local fine-grained details from local patches. The fine-grained information from the local patches is fused with the global-context information by the attention mechanism, inspired by the transformer, to estimate the brain age. We evaluate the proposed method on 8 public datasets comprising 8,379 healthy brain MRIs with an age range of 0–97 years; 6 datasets are used for cross-validation and 2 are used for evaluating generality. Compared with other state-of-the-art methods, the proposed global-local transformer reduces the mean absolute error of the estimated ages to 2.70 years and increases the correlation coefficient between the estimated age and the chronological age to 0.9853. In addition, our proposed method provides regional information about which local patches are most informative for brain age estimation. Our source code is available on: https://github.com/shengfly/global-local-transformer. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
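The attention mechanism used above to fuse local-patch features with global-context features is, at its core, scaled dot-product attention. A single-query sketch with toy dimensions (the input values are fabricated; the real model attends between feature maps, not 2-vectors):

```python
import math

def attention(query, keys, values):
    """Single-query scaled dot-product attention: softmax over
    query-key similarities, then a weighted sum of the values."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)                       # stable softmax
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# A local-patch query attends over two global-context tokens.
query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = attention(query, keys, values)
```

The query token most similar to the first key draws most of its output from the first value, which is how a local patch can selectively pull in the global context relevant to it.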