5 results for "Chi, Jianning"
Search Results
2. Thyroid Nodule Classification in Ultrasound Images by Fine-Tuning Deep Convolutional Neural Network
- Author
- Chi, Jianning, Walia, Ekta, Babyn, Paul, Wang, Jimmy, Groot, Gary, and Eramian, Mark
- Published
- 2017
- Full Text
- View/download PDF
3. X-Net: Multi-branch UNet-like network for liver and tumor segmentation from 3D abdominal CT scans.
- Author
- Chi, Jianning, Han, Xiaoying, Wu, Chengdong, Wang, Huan, and Ji, Peng
- Subjects
- COMPUTED tomography; LIVER tumors; DEEP learning; ABDOMINAL tumors; PROBLEM solving; LIVER; LIVER cells
- Abstract
The diagnosis of liver cancer is one of the most active fields in clinical practice because of its high mortality. Accurate segmentation of the liver and tumors is widely accepted as an effective way to assist doctors in assessing the disease condition and planning subsequent treatments. Recently, deep-learning-based methods have been widely used in tumor segmentation and have provided good performance. However, current methods cannot simultaneously capture the differences between tumors, inside-liver tissues, and outside-liver organs, and the extraction of features reflecting axial changes of the liver and tumors is often restricted by the heavy computational burden, limiting both learning effectiveness and efficiency. To solve these problems, we propose a novel framework to segment the liver and tumors in abdominal CT volumes, which consists of two parts: 1) a multi-branch network in which an up-sampling branch for liver-region recognition and a pyramid-like convolution structure for inner-liver feature extraction are integrated into a backbone Dense UNet, for better extraction of intra-slice features of the liver and tumors; 2) a simplified 3D UNet that uses convolutional kernels of fixed size 3 × 3 in the x-y plane as a 3D counterpart for aggregating contextual information along the z-axis from the stacked, filtered CT slices, which inhibits the influence of neighboring pixels and greatly alleviates the computational burden. These two parts are formulated as a unified end-to-end network so that the intra-slice feature representation and the inter-slice information aggregation can be learned and optimized jointly.
Furthermore, we define a novel loss function combining a modified Dice loss and a contour-detection-based loss, so that the region features and contour features of the predicted liver and tumor segmentation are jointly considered during network training and parameter optimization. Experimental results on the MICCAI 2017 Liver Tumor Segmentation Challenge dataset and the 3DIRCADb dataset demonstrate that the proposed method provides superior performance to state-of-the-art methods on standard benchmarks for liver and tumor segmentation. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
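The combined region-plus-contour loss described in this abstract can be sketched as a weighted sum of a soft Dice term and a boundary-disagreement term. The weighting `alpha`, the gradient-based contour extraction, and all function names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over a binary mask: 1 - Dice coefficient."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def contour_loss(pred, target):
    """Penalize disagreement on mask boundaries; contours are
    approximated here by nonzero gradient magnitude."""
    def contour(m):
        gy, gx = np.gradient(m.astype(float))
        return (np.hypot(gx, gy) > 0).astype(float)
    return np.mean(np.abs(contour(pred) - contour(target)))

def combined_loss(pred, target, alpha=0.5):
    # Weighted sum of the region (Dice) term and the contour term.
    return alpha * dice_loss(pred, target) + (1 - alpha) * contour_loss(pred, target)
```

A perfect prediction drives both terms to zero, while a prediction with the right area but a shifted boundary is still penalized by the contour term, which is the motivation the abstract gives for combining the two.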
4. Saliency detection via integrating deep learning architecture and low-level features.
- Author
- Chi, Jianning, Wu, Chengdong, Yu, Xiaosheng, Chu, Hao, and Ji, Peng
- Subjects
- CONDITIONAL probability; DEEP learning; OBJECT recognition (Computer vision); IMAGE representation; PROBABILITY theory
- Abstract
Deep learning methods, with their good performance in the semantic representation of images, have been widely used for saliency detection. Recent saliency detection methods apply deep learning to obtain high-level features and combine them with hand-crafted low-level features to estimate saliency. However, the relationship between high-level and low-level features is difficult to establish, resulting in an incomplete integration framework for saliency detection. In this paper, we propose a novel saliency detection model that integrates high-level and low-level features through joint probability estimation. First, high-level features from the FCN-8s network are used to estimate the probability of each superpixel being a foreground or background region. Second, low-level features are extracted from each superpixel and clustered via affinity propagation (AP); the distributions of feature vectors in the different clusters are then used to calculate the conditional probability of each superpixel being a salient object under each hypothesis. Third, the joint probability of each superpixel being salient, in either foreground or background, is computed to compose the saliency map of the whole image. To further improve the uniformity of saliency within the same object region, the structured random forest (SRF) method is used to detect image contours, and the saliency of superpixels in homogeneous regions is uniformly merged. The advantage of high-level features in representing semantic regions and that of low-level features in differentiating local details are unified and constrained by the joint probability estimation in the proposed model. Experimental results demonstrate that the proposed method provides better saliency detection performance than state-of-the-art methods on five public databases. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
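The joint-probability step in this abstract can be illustrated with a minimal sketch: the high-level branch supplies a foreground probability per superpixel, the low-level clusters supply conditional saliency probabilities under each hypothesis, and the two combine by total probability. The function name and this exact marginalization form are my assumptions about the formulation, not the paper's equations:

```python
import numpy as np

def joint_saliency(p_fg, p_sal_given_fg, p_sal_given_bg):
    """Per-superpixel saliency by total probability over the
    foreground/background hypothesis:
        P(salient) = P(fg) * P(salient | fg) + P(bg) * P(salient | bg)
    All inputs are arrays of per-superpixel probabilities."""
    p_fg = np.asarray(p_fg, dtype=float)
    return (p_fg * np.asarray(p_sal_given_fg, dtype=float)
            + (1.0 - p_fg) * np.asarray(p_sal_given_bg, dtype=float))
```

Mapping each superpixel's joint probability back to its pixels would then compose the saliency map of the whole image, as the abstract describes.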
5. Underwater image super-resolution using multi-stage information distillation networks.
- Author
- Wang, Huan, Wu, Hao, Hu, Qian, Chi, Jianning, Yu, Xiaosheng, and Wu, Chengdong
- Subjects
- HIGH resolution imaging; REMOTE submersibles; ROBOT vision; RECURRENT neural networks; FEATURE extraction
- Abstract
Recently, single-image super-resolution (SISR) has been widely applied in underwater robot vision and has achieved remarkable performance. However, most current methods suffer from a heavy computational burden and large model sizes, which limits their use in real-world underwater robotic applications. In this paper, we introduce and tackle the super-resolution (SR) problem for underwater robot vision and provide an efficient solution for near-real-time applications. We present a novel lightweight multi-stage information distillation network, named MSIDN, for a better balance between performance and applicability, which aggregates the locally distilled features from different stages for a more powerful feature representation. Moreover, a novel recursive residual feature distillation (RRFD) module is constructed to progressively extract useful features with a modest number of parameters in each stage. We also propose a channel interaction & distillation (CI&D) module that applies a channel split to the preceding features to produce two feature parts and exploits the channel-wise interaction between them to generate the distilled features, effectively extracting the useful information of the current stage without extra parameters. In addition, we present the USR-2K dataset, a collection of over 1.6K samples for large-scale underwater image SR training, together with a test set of an additional 400 samples for benchmark evaluation. Extensive experiments on several standard benchmark datasets show that the proposed MSIDN provides state-of-the-art or better performance in both quantitative and qualitative measurements.
• An information distillation network is proposed for underwater image super-resolution.
• A recursive residual module is constructed for informative feature distillation.
• A channel-wise interaction mechanism is proposed to generate distilled features.
• A novel underwater image super-resolution dataset is presented. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
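The parameter-free channel split and interaction described for the CI&D module can be sketched on a plain array. The split ratio, the gating form, and the choice of a global-mean summary are hypothetical concrete choices for illustration, not the paper's design:

```python
import numpy as np

def channel_interaction_distill(x, ratio=0.5):
    """Sketch of a channel-split distillation step on a (C, H, W) feature map:
    the first `ratio` fraction of channels is kept as the distilled part, the
    rest is passed on, and the two parts interact without extra parameters by
    reweighting the distilled channels with the mean response of the remainder."""
    c = x.shape[0]
    k = max(1, int(c * ratio))
    distilled, remaining = x[:k], x[k:]
    # Parameter-free channel interaction: a global summary of the remaining
    # part modulates the distilled part (hypothetical gating choice).
    gate = 1.0 + remaining.mean()
    return distilled * gate, remaining
```

Because the interaction uses only the features themselves (no learned weights), it adds no parameters, which matches the abstract's stated goal of a lightweight, near-real-time model.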
Discovery Service for Jio Institute Digital Library