15 results for "Li, Zuoyong"
Search Results
2. Object‐aware deep feature extraction for feature matching.
- Author
- Li, Zuoyong, Wang, Weice, Lai, Taotao, Xu, Haiping, and Keikhosrokiani, Pantea
- Subjects
- FEATURE extraction, IMAGE registration
- Abstract
Summary: Feature extraction is a fundamental step in the feature matching task, and many studies have been devoted to it. Recent research proposes extracting features with pre-trained neural networks, whose output is then used for feature matching. However, the quality and quantity of the features extracted by these methods often fail to meet the requirements of practical applications. In this article, we propose a two-stage object-aware feature matching method. Specifically, the proposed object-aware block predicts a weighted feature map through a mask predictor and a pre-feature extractor, so that the subsequent feature extractor pays more attention to key regions by using the weighted feature map. In addition, we introduce a state-of-the-art model estimation algorithm to align the image pair that serves as the input of the object-aware block. Furthermore, our method employs an advanced outlier removal algorithm to further improve matching quality. Experimental results show that our object-aware feature matching method improves matching performance compared with several state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
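The weighted-feature-map idea in the record above can be illustrated with a minimal numpy sketch. The mask predictor and pre-feature extractor here are random stand-ins for the paper's learned components, not its actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for learned components: a pre-feature extractor
# output (C x H x W) and a mask predictor output (H x W, values in (0, 1)).
features = rng.standard_normal((8, 16, 16))                   # pre-extracted features
mask = 1.0 / (1.0 + np.exp(-rng.standard_normal((16, 16))))   # sigmoid-like mask

# Weighted feature map: each channel is re-weighted by the predicted mask,
# so a downstream feature extractor focuses on high-mask (key) regions.
weighted = features * mask[None, :, :]
```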
3. Using Feature Correlation Measurement to Improve the Kernel Minimum Squared Error Algorithm
- Author
- Fan, Zizhu, Li, Zuoyong, Diniz Junqueira Barbosa, Simone, Series editor, Chen, Phoebe, Series editor, Du, Xiaoyong, Series editor, Filipe, Joaquim, Series editor, Kara, Orhun, Series editor, Kotenko, Igor, Series editor, Liu, Ting, Series editor, Sivalingam, Krishna M., Series editor, Washio, Takashi, Series editor, Tan, Tieniu, editor, Li, Xuelong, editor, Chen, Xilin, editor, Zhou, Jie, editor, Yang, Jian, editor, and Cheng, Hong, editor
- Published
- 2016
- Full Text
- View/download PDF
4. Local descriptor margin projections (LDMP) for face recognition
- Author
- Yang, Zhangjing, Huang, Pu, Wan, Minghua, Zhang, Fanlong, Yang, Guowei, Qian, Chengshan, Zhang, Jincheng, and Li, Zuoyong
- Published
- 2018
- Full Text
- View/download PDF
5. Discriminant feature extraction for image recognition using complete robust maximum margin criterion
- Author
- Chen, Xiaobo, Cai, Yingfeng, Chen, Long, and Li, Zuoyong
- Published
- 2015
- Full Text
- View/download PDF
6. Image Dehazing Network Based on Multi-scale Feature Extraction
- Author
- Li, Zuoyong, Zhang, Fuquan, Feng, Ting, and Yu, Zhaochai
- Subjects
- Scale (ratio), business.industry, Computer science, Feature extraction, Computer vision, Artificial intelligence, business, Image (mathematics)
- Published
- 2021
- Full Text
- View/download PDF
7. Multi-Perspective Feature Extraction and Fusion Based on Deep Latent Space for Diagnosis of Alzheimer's Diseases.
- Author
- Gao, Libin, Hu, Zhongyi, Li, Rui, Lu, Xingjin, Li, Zuoyong, Zhang, Xiabin, and Xu, Shiwei
- Subjects
- ALZHEIMER'S disease, FEATURE extraction, FUNCTIONAL magnetic resonance imaging, BRAIN diseases, CONVOLUTIONAL neural networks, PEARSON correlation (Statistics)
- Abstract
Resting-state functional magnetic resonance imaging (rs-fMRI) has been used to construct functional connectivity (FC) in the brain for the diagnosis and analysis of brain disease. Current studies typically use the Pearson correlation coefficient to construct dynamic FC (dFC) networks and then use these as network metrics to obtain the features needed for brain disease diagnosis and analysis. This simple observational approach makes it difficult to extract potential high-level FC features from the representations, and it ignores the rich information on spatial and temporal variability in FC. In this paper, we construct the Latent Space Representation Network (LSRNet) and train it in two stages. In the first stage, an autoencoder extracts potential high-level features and inner connections from the dFC representations. In the second stage, high-level features are extracted from two perspectives: Long Short-Term Memory (LSTM) networks extract spatial and temporal features from the local perspective, while convolutional neural networks extract high-level features from the global perspective. Finally, the fusion of the spatial-temporal features with the global high-level features is used to diagnose brain disease. The proposed method is applied to the ADNI rs-fMRI dataset, and the classification accuracy reaches 84.6% for NC/eMCI, 95.1% for NC/AD, 80.6% for eMCI/lMCI, 84.2% for lMCI/AD, and 57.3% for NC/eMCI/lMCI/AD. The experimental results show that the method has good classification performance and provides a new approach to the diagnosis of other brain diseases. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
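The dFC construction that the abstract above calls a simple observational approach, sliding-window Pearson correlation over region time series, can be sketched as follows. The window length and stride are illustrative choices, not values from the paper:

```python
import numpy as np

def dynamic_fc(ts, win=30, step=5):
    """Sliding-window dynamic functional connectivity.

    ts: (T, R) array of rs-fMRI time series for R brain regions.
    Returns a stack of (R, R) Pearson correlation matrices, one per window.
    """
    T, R = ts.shape
    mats = []
    for start in range(0, T - win + 1, step):
        window = ts[start:start + win]                   # (win, R) segment
        mats.append(np.corrcoef(window, rowvar=False))   # (R, R) Pearson FC
    return np.stack(mats)

rng = np.random.default_rng(1)
ts = rng.standard_normal((120, 10))   # 120 time points, 10 regions
dfc = dynamic_fc(ts)                  # one FC matrix per window
```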
8. FFNet: A simple image dedusting network with feature fusion.
- Author
- Huang, Jiayan, Li, Zuoyong, Wang, Chuansheng, Yu, Zhaochai, and Cao, Xinrong
- Subjects
- DATA augmentation, IMAGE reconstruction, FEATURE extraction, DEEP learning, CONVOLUTIONAL neural networks
- Abstract
Summary: Dust is a common source of air pollution. Images captured in dusty weather are usually yellowish or even brown, which reduces scene visibility and causes the loss of image details. To remove the dust and make the scene clear, this article presents a simple and effective image dedusting network called FFNet. During feature extraction, the FFNet uses several residual blocks with smoothed dilated convolution, that is, common dilated convolution followed by a separable and shared (SS) blockwise fully connected operation, to extend the receptive field and reduce the gridding artifacts caused by common dilated convolution. Furthermore, the FFNet fuses image features from different layers via an adaptive weighting scheme. Owing to the difficulty of collecting real dusty images, we used our proposed dusty image synthesis scheme for data augmentation to improve network training. Experiments on a series of synthetic and real dusty images demonstrate that the FFNet obtains better dedusting performance than several state-of-the-art image restoration methods. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
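The receptive-field expansion that motivates the dilated convolutions in the abstract above can be shown with a plain 1-D sketch. This illustrates only common dilated convolution; the separable-and-shared smoothing operation the paper adds is omitted:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Plain 1-D dilated convolution (valid padding): kernel taps are spaced
    `dilation` apart, expanding the receptive field without extra weights."""
    k = len(kernel)
    span = (k - 1) * dilation + 1   # effective receptive field of one output
    out = np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])
    return out, span

x = np.arange(10, dtype=float)
kernel = np.array([1.0, 1.0, 1.0])
y1, rf1 = dilated_conv1d(x, kernel, dilation=1)   # receptive field 3
y2, rf2 = dilated_conv1d(x, kernel, dilation=2)   # receptive field 5
```

Same three weights, but dilation 2 lets each output see a span of five input samples, which is the point of using dilated convolution to enlarge the receptive field.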
9. Multi-Class ASD Classification Based on Functional Connectivity and Functional Correlation Tensor via Multi-Source Domain Adaptation and Multi-View Sparse Representation.
- Author
- Wang, Jun, Zhang, Lichi, Wang, Qian, Chen, Lei, Shi, Jun, Chen, Xiaobo, Li, Zuoyong, and Shen, Dinggang
- Subjects
- FUNCTIONAL connectivity, FUNCTIONAL magnetic resonance imaging, AUTISM spectrum disorders, OXYGEN in the blood, AUTOMATIC classification, DIAGNOSIS methods
- Abstract
Resting-state functional magnetic resonance imaging (rs-fMRI) reflects the functional activity of brain regions through blood-oxygen-level dependent (BOLD) signals. To date, many computer-aided diagnosis methods based on rs-fMRI have been developed for Autism Spectrum Disorder (ASD). These methods are mostly binary classification approaches that determine whether a subject is an ASD patient or not. However, the disease often consists of several sub-categories, which are complex and thus still confusing to many automatic classification methods. Besides, existing methods usually focus on functional connectivity (FC) features in grey matter regions, which account for only a small portion of the rs-fMRI data. Recently, the possibility of revealing connectivity information in the white matter regions of rs-fMRI has drawn high attention. To this end, we propose to use patch-based functional correlation tensor (PBFCT) features extracted from rs-fMRI in white matter, in addition to the traditional FC features from gray matter, to develop a novel multi-class ASD diagnosis method. Our method has two stages. Specifically, in the first stage of multi-source domain adaptation (MSDA), the source subjects belonging to multiple clinical centers (thus called source domains) are all transformed into the same target feature space, so that each subject in the target domain can be linearly reconstructed by the transformed subjects. In the second stage of multi-view sparse representation (MVSR), a multi-view classifier for multi-class ASD diagnosis is developed by jointly using both views of the FC and PBFCT features. The experimental results on the ABIDE dataset verify the effectiveness of our method, which is capable of accurately classifying each subject into a respective ASD sub-category. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
10. Fuzzy Clustering with Self-growing Net.
- Author
- Ying, Wenhao, Wang, Jun, Deng, Zhaohong, Zhang, Fuquan, and Li, Zuoyong
- Subjects
- FUZZY clustering technique, FEATURE extraction, FUZZY logic, PRINCIPAL components analysis, ALGORITHMS
- Abstract
A novel deep feature mapping method, the self-growing net (SG-Net), is proposed, and its combination with classical fuzzy c-means (FCM), called SG-Net-FCM, is further developed. SG-Net is a feedforward learning structure for nonlinear explicit feature mapping and includes four types of layers: input, fuzzy mapping, hybrid, and output layers. The fuzzy mapping layer maps the data from the input layer to a high-dimensional feature space using TSK fuzzy mapping, i.e., the fuzzy mapping of the Takagi–Sugeno–Kang fuzzy system (TSK-FS). Afterward, each layer in SG-Net accepts additional inputs from all preceding layers and provides its own distinguishing features, obtained via principal component analysis, to all subsequent layers. The final output of SG-Net is fed to FCM. Since SG-Net-FCM is built on the TSK fuzzy mapping, it is more interpretable than classical kernelized fuzzy clustering methods. The effectiveness of the proposed clustering algorithm is experimentally verified on UCI datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
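The classical FCM stage that SG-Net feeds into can be sketched on its own. This is the textbook alternating update, with no SG-Net feature mapping in front of it:

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    """Classical fuzzy c-means: alternate fuzzy membership and centroid updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)        # memberships of each point sum to 1
    for _ in range(iters):
        W = U ** m                           # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1))          # u_ik proportional to d_ik^(-2/(m-1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

# Two well-separated toy clusters around 0 and 10.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.1, (5, 2)), rng.normal(10.0, 0.1, (5, 2))])
U, centers = fcm(X)
labels = U.argmax(axis=1)   # hardened cluster assignment
```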
11. Illumination-insensitive features for face recognition.
- Author
- Cheng, Yong, Jiao, Liangbao, Cao, Xuehong, and Li, Zuoyong
- Subjects
- HUMAN facial recognition software, FEATURE extraction, DIGITAL image processing, PIXELS, COMPUTER vision
- Abstract
Illumination variation is one of the most challenging problems for robust face recognition. In this paper, after investigating the ratio relationship between two neighboring pixels in a digital image, we propose two illumination-insensitive features, i.e., the non-directional local reflectance normalization (NDLRN) and the fused multi-directional local reflectance normalization (fMDLRN), which not only effectively reduce the illumination difference among facial images captured under different illumination conditions, but also preserve facial details. Experimental results show that NDLRN and fMDLRN can significantly alleviate the adverse effect of complex illumination on face recognition. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
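The premise behind the reflectance-normalization features above, that the ratio of two neighboring pixels is nearly invariant to slowly varying illumination under the Lambertian model I = R * L, can be checked with a toy 1-D example; the NDLRN/fMDLRN features themselves are not reproduced here:

```python
import numpy as np

# Synthetic reflectance along a scanline, rendered under two slowly varying
# illumination fields (dim and bright). Under I = R * L, neighboring-pixel
# ratios approximately cancel L and depend mostly on reflectance R.
rng = np.random.default_rng(2)
R = rng.uniform(0.2, 1.0, 64)        # reflectance along a scanline
x = np.linspace(0, 1, 64)
I_dim = R * (0.3 + 0.1 * x)          # slowly varying dim lighting
I_bright = R * (0.8 + 0.1 * x)       # slowly varying bright lighting

ratio_dim = I_dim[1:] / I_dim[:-1]   # neighboring-pixel ratio features
ratio_bright = I_bright[1:] / I_bright[:-1]

# The ratio features stay nearly identical despite very different lighting.
gap = np.max(np.abs(ratio_dim - ratio_bright))
```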
12. Modified local entropy-based transition region extraction and thresholding.
- Author
- Li, Zuoyong, Zhang, David, Xu, Yong, and Liu, Chuancai
- Subjects
- ENTROPY (Information theory), FEATURE extraction, IMAGE analysis, VISUAL perception, SOFT computing, ELECTRONIC data processing
- Abstract
Abstract: Transition region-based thresholding is a newly developed image binarization technique. The transition region descriptor plays a key role in the process, greatly affecting the accuracy of transition region extraction and subsequent thresholding. Local entropy (LE), a classic descriptor, considers only the frequency of gray level changes, so non-transition regions with frequent yet slight gray level changes are easily misclassified as transition regions. To eliminate this limitation, a modified descriptor taking both the frequency and the degree of gray level changes into account is developed. In addition, in light of human visual perception, a preprocessing step named image transformation is proposed to simplify original images and further enhance segmentation performance. The proposed algorithm was compared with LE, the local fuzzy entropy-based method (LFE), and four other thresholding methods on a variety of images including some NDT images, and the experimental results show its superiority. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
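The classic LE descriptor that the abstract improves upon can be sketched directly. Note how it scores any frequent gray-level change highly regardless of magnitude, which is exactly the limitation the modified descriptor addresses:

```python
import numpy as np

def local_entropy(img, win=3, levels=256):
    """Shannon entropy of the gray-level histogram in each win x win window
    (the classic LE descriptor; borders are left at zero for simplicity)."""
    H, W = img.shape
    r = win // 2
    out = np.zeros((H, W))
    for i in range(r, H - r):
        for j in range(r, W - r):
            patch = img[i - r:i + r + 1, j - r:j + r + 1]
            hist = np.bincount(patch.ravel(), minlength=levels)
            p = hist[hist > 0] / patch.size
            out[i, j] = -np.sum(p * np.log2(p))  # 0 for flat, high for varied
    return out
```

A flat window has entropy 0; a window of nine distinct gray levels reaches log2(9), whether those levels differ by 1 or by 100, so frequency of change is captured but degree of change is not.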
13. URNet: A U-Net based residual network for image dehazing.
- Author
- Feng, Ting, Wang, Chuansheng, Chen, Xinwei, Fan, Haoyi, Zeng, Kun, and Li, Zuoyong
- Subjects
- NETWORK performance, DEEP learning, FEATURE extraction
- Abstract
Low visibility in hazy weather causes the loss of image details in digital images captured by imaging devices such as monitors. This paper proposes an end-to-end U-Net based residual network (URNet) to improve the visibility of hazy images. The encoder module of URNet uses hybrid convolution, combining standard convolution with dilated convolution, to expand the receptive field and extract image features with more detail. URNet embeds several ResNet building blocks at the junction between the encoder and decoder modules, which prevents network performance degradation due to vanishing gradients. Considering the large absolute differences in the saturation and value components between hazy and haze-free images in the HSV color space, URNet defines a new loss function to better guide network training. Experimental results on synthetic and real hazy images show that URNet significantly improves the dehazing effect compared to state-of-the-art methods.
• We propose a U-Net based residual network (URNet) for image dehazing.
• URNet uses hybrid convolution in its encoder module to better extract image features.
• URNet embeds several ResNet building blocks to prevent vanishing gradients.
• URNet defines a new loss function to better guide network training.
• Experimental results show that URNet significantly improves the image dehazing effect.
[ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
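A hypothetical sketch of the HSV-guided loss idea from the abstract above: penalizing the saturation (S) and value (V) differences between a prediction and its haze-free target. This is an assumed formulation for illustration, not the paper's actual loss function:

```python
import numpy as np
import colorsys

def hsv_sv_loss(pred_rgb, target_rgb):
    """Mean absolute difference of the HSV saturation and value components
    between two RGB images (H x W x 3, floats in [0, 1])."""
    def to_sv(img):
        flat = img.reshape(-1, 3)
        # colorsys.rgb_to_hsv returns (h, s, v); keep only (s, v) per pixel
        return np.array([colorsys.rgb_to_hsv(*px)[1:] for px in flat])
    return np.mean(np.abs(to_sv(pred_rgb) - to_sv(target_rgb)))
```

In a real network this term would be combined with a pixel-space reconstruction loss and computed with a differentiable RGB-to-HSV conversion; the sketch only shows the quantity being penalized.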
14. Content-based image retrieval using computational visual attention model.
- Author
- Liu, Guang-Hai, Yang, Jing-Yu, and Li, ZuoYong
- Subjects
- CONTENT-based image retrieval, MATHEMATICAL models, DATA structures, INFORMATION theory, FEATURE extraction, IMAGE processing
- Abstract
It is a very challenging problem to faithfully simulate visual attention mechanisms for content-based image retrieval (CBIR). In this paper, we propose a novel computational visual attention model, the saliency structure model, for content-based image retrieval. First, a novel visual cue, color volume, together with edge information, is introduced to detect saliency regions instead of using the primary visual features (e.g., color, intensity, and orientation). Second, the energy feature of the gray-level co-occurrence matrices is used for globally suppressing maps, instead of the local maxima normalization operator in Itti's model. Third, a novel image representation method, the saliency structure histogram, is proposed to simulate the orientation-selective mechanism for image representation within the CBIR framework. We have evaluated the performance of the proposed algorithm on two datasets. The experimental results clearly demonstrate that the proposed algorithm significantly outperforms the standard BOW baseline and the micro-structure descriptor. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
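The GLCM energy feature used for global suppression in the abstract above is a classical texture measure and can be sketched directly; the rest of the saliency structure model is not reproduced here:

```python
import numpy as np

def glcm_energy(img, dx=1, dy=0, levels=8):
    """Energy (angular second moment) of the gray-level co-occurrence matrix
    for integer image `img` with values in [0, levels), offset (dx, dy)."""
    H, W = img.shape
    glcm = np.zeros((levels, levels))
    for i in range(H - dy):
        for j in range(W - dx):
            glcm[img[i, j], img[i + dy, j + dx]] += 1   # count co-occurrences
    p = glcm / glcm.sum()
    return np.sum(p ** 2)   # high energy = homogeneous texture
```

A perfectly uniform image concentrates all co-occurrence mass in one cell (energy 1.0), while varied textures spread the mass and lower the energy, which is why energy can serve as a global homogeneity score for suppressing maps.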
15. Sparse embedding visual attention system combined with edge information
- Author
- Zhao, Cairong, Liu, Chuancai, Lai, Zhihui, Rao, Huaming, and Li, Zuoyong
- Subjects
- COMPUTER vision, MATHEMATICAL models, FEATURE extraction, IMAGE analysis, EMBEDDINGS (Mathematics), COMPUTER simulation
- Abstract
Abstract: Numerous computational models of visual attention have been suggested during the last two decades, but challenges remain, such as which early visual features should be extracted and how to combine these different features into a unique "saliency" map. To address these challenges, we propose a sparse embedding visual attention system combined with edge information, described as a hierarchical model in this paper. In the first stage, we extract edge information in addition to color, intensity, and orientation as early visual features. In the second stage, we present a novel sparse embedding feature combination strategy. Results on different scene images show that our model outperforms other visual attention computational models. [Copyright Elsevier]
- Published
- 2011
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library