12 results for "Chi, Jianning"
Search Results
2. Low-Dose CT Image Super-resolution Network with Noise Inhibition Based on Feedback Feature Distillation Mechanism
- Author
- Chi, Jianning, Wei, Xiaolin, Sun, Zhiyi, Yang, Yongming, and Yang, Bin
- Published
- 2024
- Full Text
- View/download PDF
3. DSTAN: A Deformable Spatial-temporal Attention Network with Bidirectional Sequence Feature Refinement for Speckle Noise Removal in Thyroid Ultrasound Video
- Author
- Chi, Jianning, Miao, Jian, Chen, Jia-hui, Wang, Huan, Yu, Xiaosheng, and Huang, Ying
- Published
- 2024
- Full Text
- View/download PDF
4. Degradation Adaption Local-to-Global Transformer for Low-Dose CT Image Denoising
- Author
- Wang, Huan, Chi, Jianning, Wu, Chengdong, Yu, Xiaosheng, and Wu, Hao
- Published
- 2023
- Full Text
- View/download PDF
5. Low-Dose CT Image Super-Resolution Network with Dual-Guidance Feature Distillation and Dual-Path Content Communication
- Author
- Chi, Jianning, Sun, Zhiyi, Zhao, Tianli, Wang, Huan, Yu, Xiaosheng, and Wu, Chengdong (volume editors: Greenspan, Hayit, Madabhushi, Anant, Mousavi, Parvin, Salcudean, Septimiu, Duncan, James, Syeda-Mahmood, Tanveer, and Taylor, Russell; series founding editors: Goos, Gerhard, and Hartmanis, Juris; editorial board: Bertino, Elisa, Gao, Wen, Steffen, Bernhard, and Yung, Moti)
- Published
- 2023
- Full Text
- View/download PDF
6. Multi-Modal Brain Tumor Data Completion Based on Reconstruction Consistency Loss.
- Author
- Jiang, Yang, Zhang, Shuang, and Chi, Jianning
- Subjects
- DIGITAL image processing, DEEP learning, MAGNETIC resonance imaging, BRAIN tumors, DIAGNOSTIC imaging, ARTIFICIAL neural networks, ALGORITHMS
- Abstract
Multi-modal brain magnetic resonance imaging (MRI) data has been widely applied in vision-based brain tumor segmentation methods due to the complementary diagnostic information from different modalities. Since multi-modal image data is likely to be corrupted by noise or artifacts during practical scanning, which makes it difficult to build a universal model for subsequent segmentation and diagnosis with incomplete input data, image completion has become one of the most attractive fields in medical image pre-processing. It can not only assist clinicians to observe the patient's lesion area more intuitively and comprehensively, but also save costs for patients and reduce their psychological pressure during tedious pathological examinations. Recently, many deep learning-based methods have been proposed to complete multi-modal image data and have provided good performance. However, current methods cannot fully reflect the continuous semantic information between adjacent slices and the structural information of intra-slice features, resulting in limited completion effects and efficiencies. To solve these problems, in this work, we propose a novel generative adversarial network (GAN) framework, named random generative adversarial network (RAGAN), to complete the missing T1, T1ce, and FLAIR data from the given T2 modal data in real brain MRI, which consists of the following parts: (1) For the generator, we use T2 modal images and multi-modal classification labels from the same sample for cyclically supervised training of image generation, so as to realize the restoration of arbitrary modal images. (2) For the discriminator, a multi-branch network is proposed where the primary branch is designed to judge whether a generated modal image is similar to the target modal image, while the auxiliary branch judges whether its essential visual features are similar to those of the target modal image.
We conduct qualitative and quantitative experimental validations on the BraTS 2018 dataset, generating 10,686 MRI images for each missing modality. Real brain tumor morphology images were compared with the synthetic ones using PSNR and SSIM as evaluation metrics. Experiments demonstrate that the brightness, resolution, location, and morphology of brain tissue under different modalities are well reconstructed. Meanwhile, we also use a segmentation network as a further validation experiment, blending synthetic and real images as its input. With the classic UNet as the segmentation network, the segmentation result is 77.58%. To further demonstrate the value of the proposed method, we use the stronger RES_UNet with deep supervision as the segmentation model, and the segmentation accuracy is 88.76%. Although our method does not significantly outperform other algorithms, its DICE value is 2% higher than that of TC-MGAN, the current state-of-the-art data completion algorithm. [ABSTRACT FROM AUTHOR]
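For readers unfamiliar with the PSNR metric used in the evaluation above, a minimal sketch of its computation (the function name and toy image values are ours, not the paper's):

```python
import numpy as np

def psnr(real, synth, max_val=255.0):
    """Peak signal-to-noise ratio between a real and a synthesized image."""
    diff = np.asarray(real, dtype=np.float64) - np.asarray(synth, dtype=np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# toy 8x8 "slices": a constant error of 1 gray level gives MSE = 1,
# so PSNR = 20 * log10(255) ≈ 48.13 dB
a = np.full((8, 8), 128.0)
b = a + 1.0
```

Higher PSNR means the synthetic modality is closer to the real one pixel-wise; SSIM complements it by comparing local structure.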
- Published
- 2023
- Full Text
- View/download PDF
7. Thyroid Nodule Classification in Ultrasound Images by Fine-Tuning Deep Convolutional Neural Network
- Author
- Chi, Jianning, Walia, Ekta, Babyn, Paul, Wang, Jimmy, Groot, Gary, and Eramian, Mark
- Published
- 2017
- Full Text
- View/download PDF
8. X-Net: Multi-branch UNet-like network for liver and tumor segmentation from 3D abdominal CT scans.
- Author
- Chi, Jianning, Han, Xiaoying, Wu, Chengdong, Wang, Huan, and Ji, Peng
- Subjects
- COMPUTED tomography, LIVER tumors, DEEP learning, ABDOMINAL tumors, PROBLEM solving, LIVER, LIVER cells
- Abstract
The diagnosis of liver cancer is one of the most attractive fields in clinical practice due to its high mortality. Accurate segmentation of the liver and tumors is widely accepted as an effective way to assist doctors in determining the disease condition and planning subsequent treatments. Recently, deep learning based methods have been widely used in tumor segmentation and have provided good performance. However, current methods cannot fully reflect the differences between tumors, inside-liver tissues and outside-liver organs simultaneously, while the extraction of features reflecting axial changes of the liver and tumors is always discounted by the heavy computational burden, resulting in limited learning effects and efficiencies. To solve these problems, in this paper, we propose a novel framework to segment the liver and tumors in abdominal CT volumes, which consists of two parts: 1) we propose a multi-branch network where an up-sampling branch for liver region recognition and a pyramid-like convolution structure for inner-liver feature extraction are integrated into the backbone Dense UNet structure for better extracting intra-slice features of the liver and tumors; 2) we simplify the traditional 3D UNet by using convolutional kernels with a fixed size of 3 × 3 in the x-y plane and apply it as a 3D counterpart for aggregating contextual information along the z-axis from the stacked, filtered CT slices, with the advantages of inhibiting the influence from neighboring pixels and greatly alleviating the computational burden. The above two parts are formulated as a unified end-to-end network so that the intra-slice feature representation and the inter-slice information aggregation can be learned and optimized jointly.
Furthermore, we define a novel loss function combining a modified dice loss and a contour-detection based loss, so that the region features and contour features of the predicted liver and tumor segmentations are jointly considered for network training and parameter optimization. Experimental results on the MICCAI 2017 Liver Tumor Segmentation Challenge dataset and the 3DIRCADb dataset demonstrate that the proposed method provides superior performance to the state-of-the-art methods on the standard benchmarks for liver and tumor segmentation. [ABSTRACT FROM AUTHOR]
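The modified dice loss mentioned above is not specified in the abstract; a plain soft Dice loss, the usual starting point for such modifications, can be sketched as follows (the toy prediction and mask are ours):

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary mask.
    Loss is 0 for a perfect overlap and approaches 1 for no overlap."""
    inter = np.sum(pred * target)
    total = np.sum(pred) + np.sum(target)
    return 1.0 - (2.0 * inter + eps) / (total + eps)

# toy 2x2 example: overlap mass 1.7 out of a total mass 4.0 -> Dice 0.85, loss 0.15
pred = np.array([[0.9, 0.1], [0.8, 0.2]])
mask = np.array([[1.0, 0.0], [1.0, 0.0]])
```

The paper's full objective adds a contour-detection based term to this region term, so boundary shape is penalized alongside overlap.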
- Published
- 2021
- Full Text
- View/download PDF
9. Adaptive Aggregated Attention Network for Pulmonary Nodule Classification.
- Author
- Xia, Kai, Chi, Jianning, Gao, Yuan, Jiang, Yang, and Wu, Chengdong
- Subjects
- PULMONARY nodules, IMAGE databases, CANCER-related mortality, CLASSIFICATION, LUNG cancer
- Abstract
Lung cancer has one of the highest cancer mortality rates in the world and threatens people's health. Timely and accurate diagnosis can greatly reduce the number of deaths, so an accurate diagnosis system is extremely important. Existing methods have achieved significant performance on lung cancer diagnosis, but they are insufficient in fine-grained representations. In this paper, we propose a novel attentive method to differentiate malignant and benign pulmonary nodules. Firstly, the residual attention network (RAN) and squeeze-and-excitation network (SEN) were utilized to extract spatial and contextual features. Secondly, a novel multi-scale attention network (MSAN) was proposed to capture multi-scale attention features automatically; the MSAN integrated the advantages of the spatial and contextual attention mechanisms, which are very important for capturing the salient features of nodules. Finally, the gradient boosting machine (GBM) algorithm was used to differentiate malignant and benign nodules. We conducted a series of experiments on the Lung Image Database Consortium image collection (LIDC-IDRI) database, achieving an accuracy of 91.9%, a sensitivity of 91.3%, a false positive rate of 8.0%, and an F1-score of 91.0%. The experimental results demonstrate that our proposed method outperforms the state-of-the-art methods with respect to accuracy, false positive rate, and F1-score. [ABSTRACT FROM AUTHOR]
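The reported accuracy, sensitivity, false positive rate and F1-score all derive from the same 2×2 confusion matrix; a small sketch with hypothetical counts (not the paper's actual confusion matrix, only chosen to land near the reported rates):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, false positive rate and F1 from a 2x2 confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)      # recall on malignant nodules
    fpr = fp / (fp + tn)              # benign nodules wrongly flagged as malignant
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, fpr, f1

# hypothetical counts for illustration
acc, sens, fpr, f1 = classification_metrics(tp=91, fp=8, tn=92, fn=9)
```

Reporting FPR alongside sensitivity matters clinically: the first counts benign nodules sent for unnecessary follow-up, the second counts malignant nodules that would be missed.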
- Published
- 2021
- Full Text
- View/download PDF
10. Saliency detection via integrating deep learning architecture and low-level features.
- Author
- Chi, Jianning, Wu, Chengdong, Yu, Xiaosheng, Chu, Hao, and Ji, Peng
- Subjects
- CONDITIONAL probability, DEEP learning, OBJECT recognition (Computer vision), IMAGE representation, PROBABILITY theory
- Abstract
Deep learning methods, with their good performance in semantic representation of different images, have been widely used for saliency detection. Recent saliency detection methods have applied deep learning to obtain high-level features and combined them with hand-crafted low-level features to estimate saliency in images. However, it is difficult to find the relationship between high-level and low-level features, resulting in an incomplete integration framework for saliency detection. In this paper, we propose a novel saliency detection model that integrates high-level and low-level features with joint probability estimation. Firstly, the high-level features from the FCN-8s network are used to estimate the probability of each superpixel being a foreground or background region. Secondly, low-level features are extracted from each superpixel and clustered via affinity propagation (AP) clustering. The distributions of vectors from different clusters are then utilized to calculate the conditional probability of each superpixel being a salient object under different assumptions. Thirdly, the joint probability of each superpixel being a salient object in the foreground or background is computed to compose the saliency map of the whole image. To further improve the uniformity of saliency within the same object region, the structured random forest (SRF) method is used to detect the contours of the image, and the saliency of superpixels in homogeneous regions is uniformly merged. The advantage of high-level features in representing semantic regions and that of low-level features in differentiating local details in the image are unified and constrained by the joint probability estimation in the proposed model. Experimental results demonstrate that the proposed method provides better saliency detection performance than the state-of-the-art methods on 5 public databases. [ABSTRACT FROM AUTHOR]
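The joint estimation described above can be read as a total-probability combination per superpixel; a simplified one-superpixel sketch (the function name and probability values are ours, and this compresses the paper's cluster-conditional terms into two conditionals):

```python
def joint_saliency(p_fg, p_sal_given_fg, p_sal_given_bg):
    """Total-probability saliency of one superpixel: the high-level network's
    foreground belief weights the low-level, cluster-conditional saliency
    estimates for the foreground and background hypotheses."""
    return p_fg * p_sal_given_fg + (1.0 - p_fg) * p_sal_given_bg

# hypothetical superpixel: strong foreground belief, salient-looking cluster
s = joint_saliency(p_fg=0.8, p_sal_given_fg=0.9, p_sal_given_bg=0.1)  # ≈ 0.74
```

Computing `s` for every superpixel and writing it back to the pixels of that superpixel yields the saliency map of the whole image.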
- Published
- 2019
- Full Text
- View/download PDF
11. MID-UNet: Multi-input directional UNet for COVID-19 lung infection segmentation from CT images.
- Author
- Chi, Jianning, Zhang, Shuang, Han, Xiaoying, Wang, Huan, Wu, Chengdong, and Yu, Xiaosheng
- Subjects
- COVID-19, COMPUTED tomography, LUNGS, LUNG infections, COVID-19 pandemic, DEEP learning
- Abstract
Coronavirus Disease 2019 (COVID-19) has spread globally since the first case was reported in December 2019, becoming a world-wide health crisis with over 90 million total confirmed cases. Segmentation of lung infection from computed tomography (CT) scans via deep learning methods has great potential in assisting the diagnosis and healthcare for COVID-19. However, current deep learning methods for segmenting infection regions from lung CT images suffer from three problems: (1) low differentiation of semantic features between the COVID-19 infection regions, other pneumonia regions and normal lung tissues; (2) high variation of visual characteristics between different COVID-19 cases or stages; (3) high difficulty in constraining the irregular boundaries of the COVID-19 infection regions. To solve these problems, a multi-input directional UNet (MID-UNet) is proposed to segment COVID-19 infections in lung CT images. For the input part of the network, we first propose an image blurry descriptor to reflect the texture characteristics of the infections. Then the original CT image, the image enhanced by adaptive histogram equalization, the image filtered by the non-local means filter and the blurry feature map are adopted together as the input of the proposed network. For the structure of the network, we propose the directional convolution block (DCB), which consists of four directional convolution kernels. DCBs are applied on the short-cut connections to refine the extracted features before they are transferred to the de-convolution parts. Furthermore, we propose a contour loss based on local curvature histograms and combine it with the binary cross entropy (BCE) loss and the intersection over union (IOU) loss for better segmentation boundary constraint. Experimental results on the COVID-19-CT-Seg dataset demonstrate that our proposed MID-UNet provides superior performance over the state-of-the-art methods on segmenting COVID-19 infections from CT images.
• We propose a multi-dimensional input to reflect the GGO or infiltration features.
• We propose a directional convolution block to represent the fibrotic-streak features.
• We propose a novel region contour loss function to restrict irregular boundaries. [ABSTRACT FROM AUTHOR]
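The combined loss above adds a curvature-histogram contour term to BCE and IoU losses; the BCE + soft-IoU part alone can be sketched as follows (weights and toy values are illustrative, and the contour term is omitted since its exact form is not given in the abstract):

```python
import numpy as np

def bce_iou_loss(pred, mask, w_bce=1.0, w_iou=1.0, eps=1e-6):
    """Weighted sum of binary cross entropy and a soft IoU loss over a
    predicted probability map and a binary infection mask."""
    pred = np.clip(pred, eps, 1.0 - eps)           # keep log() finite
    bce = -np.mean(mask * np.log(pred) + (1.0 - mask) * np.log(1.0 - pred))
    inter = np.sum(pred * mask)
    union = np.sum(pred) + np.sum(mask) - inter
    iou_loss = 1.0 - (inter + eps) / (union + eps)
    return w_bce * bce + w_iou * iou_loss

mask = np.array([1.0, 0.0])
good = bce_iou_loss(np.array([0.99, 0.01]), mask)  # near-perfect prediction
bad = bce_iou_loss(np.array([0.01, 0.99]), mask)   # inverted prediction
```

BCE scores each pixel independently while the IoU term scores region overlap as a whole; combining them is a common way to balance pixel accuracy against region shape.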
- Published
- 2022
- Full Text
- View/download PDF
12. DCLNet: Dual Closed-loop Networks for face super-resolution.
- Author
- Wang, Huan, Hu, Qian, Wu, Chengdong, Chi, Jianning, Yu, Xiaosheng, and Wu, Hao
- Subjects
- HIGH resolution imaging, FACE, PRIOR learning, DEEP learning
- Abstract
Recently, deep-learning based face super-resolution methods have succeeded in hallucinating high-resolution face images from low-resolution inputs. However, since the given face images have tiny resolutions and arbitrary characteristics that need to be reconstructed at high magnification factors, existing methods still suffer from a large space of possible mapping functions, resulting in limited performance in producing sharp textures. In this paper, we propose a novel CNN-based Dual Closed-loop Network (DCLNet) to minimize the possible mapping space. To that end, we design two dual learning networks that form a dual closed-loop structure with a primary face super-resolution network, which provides the primary branch with an additional prior constraint to guide the restoration of essential facial features. Our work represents the first attempt to introduce multiple dual learning networks into a face super-resolution model to constrain the possible mapping space. In addition, a progressive facial prior estimation framework and a new prior-guided feature enhancement module are presented to integrate facial prior knowledge and guide the face image super-resolution. By generating multiple facial component maps for the activation of essential facial parts, our enhancement module can address the difficulty in learning and integrating strong priors into a face super-resolution model. In this way, the collaboration between the face super-resolution and alignment processes can be effectively enhanced. Extensive experiments are implemented on the CelebA and Helen datasets, showing that our proposed method provides state-of-the-art or even better performance in both quantitative and qualitative measurements. [ABSTRACT FROM AUTHOR]
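The closed-loop idea above, a dual network mapping the super-resolved output back toward the low-resolution input, can be sketched with a fixed down-mapping standing in for the learned dual network (a strong simplification of the paper's design; all names and shapes here are ours):

```python
import numpy as np

def dual_consistency_loss(lr, sr, down):
    """L1 closed-loop penalty: the dual network's down-mapping of the
    super-resolved image should reproduce the low-resolution input."""
    return float(np.mean(np.abs(down(sr) - lr)))

def avg_pool_2x(img):
    """Fixed 2x average pooling, standing in for a learned dual down-network."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

lr = np.ones((2, 2))
sr = np.ones((4, 4))   # a perfectly consistent loop gives zero loss
```

Penalties of this kind shrink the space of admissible super-resolution mappings: any candidate output whose down-mapping drifts from the observed low-resolution input is discouraged.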
- Published
- 2021
- Full Text
- View/download PDF