71 results for "Jing, Xiao-Yuan"
Search Results
2. Single-/Multi-Source Domain Adaptation via domain separation: A simple but effective method
- Author
- Cai, Ziyun, Zhang, Dandan, Zhang, Tengfei, Hu, Changhui, and Jing, Xiao-Yuan
- Published
- 2023
- Full Text
- View/download PDF
3. Learning enhanced specific representations for multi-view feature learning
- Author
- Hao, Yaru, Jing, Xiao-Yuan, Chen, Runhang, and Liu, Wei
- Published
- 2023
- Full Text
- View/download PDF
4. Dual contrastive universal adaptation network for multi-source visual recognition
- Author
- Cai, Ziyun, Zhang, Tengfei, Ma, Fumin, and Jing, Xiao-Yuan
- Published
- 2022
- Full Text
- View/download PDF
5. Adaptive multi-scale transductive information propagation for few-shot learning
- Author
- Fu, Sichao, Liu, Baodi, Liu, Weifeng, Zou, Bin, You, Xinhua, Peng, Qinmu, and Jing, Xiao-Yuan
- Published
- 2022
- Full Text
- View/download PDF
6. Aligned metric representation based balanced multiset ensemble learning for heterogeneous defect prediction
- Author
- Chen, Haowen, Jing, Xiao-Yuan, Zhou, Yuming, Li, Bing, and Xu, Baowen
- Published
- 2022
- Full Text
- View/download PDF
7. Adaptive deformable convolutional network
- Author
- Chen, Feng, Wu, Fei, Xu, Jing, Gao, Guangwei, Ge, Qi, and Jing, Xiao-Yuan
- Published
- 2021
- Full Text
- View/download PDF
8. Structured discriminative tensor dictionary learning for unsupervised domain adaptation
- Author
- Wu, Songsong, Yan, Yan, Tang, Hao, Qian, Jianjun, Zhang, Jian, Dong, Yuning, and Jing, Xiao-Yuan
- Published
- 2021
- Full Text
- View/download PDF
9. Multi-view semantic learning network for point cloud based 3D object detection
- Author
- Yang, Yongguang, Chen, Feng, Wu, Fei, Zeng, Deliang, Ji, Yi-mu, and Jing, Xiao-Yuan
- Published
- 2020
- Full Text
- View/download PDF
10. Group sparse additive machine with average top-k loss
- Author
- Yuan, Peipei, You, Xinge, Chen, Hong, Peng, Qinmu, Zhao, Yue, Xu, Zhou, Jing, Xiao-Yuan, and He, Zhenyu
- Published
- 2020
- Full Text
- View/download PDF
11. Dynamic attention network for semantic segmentation
- Author
- Wu, Fei, Chen, Feng, Jing, Xiao-Yuan, Hu, Chang-Hui, Ge, Qi, and Ji, Yimu
- Published
- 2020
- Full Text
- View/download PDF
12. Software effort estimation based on open source projects: Case study of Github
- Author
- Qi, Fumin, Jing, Xiao-Yuan, Zhu, Xiaoke, Xie, Xiaoyuan, Xu, Baowen, and Ying, Shi
- Published
- 2017
- Full Text
- View/download PDF
13. Image denoising using weighted nuclear norm minimization with multiple strategies
- Author
- Liu, Xiaohua, Jing, Xiao-Yuan, Tang, Guijin, Wu, Fei, and Ge, Qi
- Published
- 2017
- Full Text
- View/download PDF
14. Semi-supervised multi-view graph convolutional networks with application to webpage classification.
- Author
- Wu, Fei, Jing, Xiao-Yuan, Wei, Pengfei, Lan, Chao, Ji, Yimu, Jiang, Guo-Ping, and Huang, Qinghua
- Subjects
- *REPRESENTATIONS of graphs, *CLASSIFICATION, *LEARNING modules, *SPACE frame structures
- Abstract
Semi-supervised multi-view learning (SML) is a hot research topic in recent years, with webpage classification being a typical application domain. The performance of SML is further boosted by the successful introduction of graph convolutional network (GCN) for learning discriminant node representations. However, there remains much space to improve the GCN-based SML technique, particularly on how to adaptively learn optimal graph structures for multi-view graph convolutional representation learning and make full use of the label and structure information in labeled and unlabeled multi-view samples. In this paper, we propose a novel SML approach named semi-supervised multi-view graph convolutional networks (SMGCN) for webpage classification. It contains a multi-view graph construction module and a semi-supervised multi-view graph convolutional representation learning module, which are integrated into a unified network architecture. The former aims to obtain optimal graph structure for each view. And the latter performs graph convolutional representation learning for each view, and provides an inter-view attention scheme to fuse multi-view representations. Network training is guided by the losses defined on both label and feature spaces, such that the label and structure information in labeled and unlabeled data is fully explored. Experiments on two widely used webpage datasets demonstrate that SMGCN can achieve state-of-the-art classification performance. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
15. Mining negative samples on contrastive learning via curricular weighting strategy.
- Author
- Zhuang, Jin, Jing, Xiao-Yuan, and Jia, Xiaodong
- Subjects
- *LEARNING, *LEARNING strategies
- Abstract
Contrastive learning, which pulls positive pairs closer and pushes away negative pairs, has remarkably propelled the development of self-supervised representation learning. Previous studies either neglected negative sample selection, resulting in suboptimal performance, or emphasized hard negative samples from the beginning of training, potentially leading to convergence issues. Drawing inspiration from curriculum learning, we find that learning with negative samples ranging from easy to hard improves both model performance and convergence rate. Therefore, we propose a dynamic negative sample weighting strategy for contrastive learning. Specifically, we design a loss function that adaptively adjusts the weights assigned to negative samples based on the model's performance. Initially, the loss prioritizes easy samples, but as training advances, it shifts focus to hard samples, enabling the model to learn more discriminative representations. Furthermore, to prevent an undue emphasis on false negative samples during later stages, which probably results in trivial solutions, we apply L2 regularization on the weights of hard negative samples. Extensive qualitative and quantitative experiments demonstrate the effectiveness of the proposed weighting strategy. The ablation study confirms both the reasonableness of the curriculum and the effectiveness of the regularization.
• The strategy for selecting negative samples is closely related to the performance of contrastive learning.
• Employing a negative sample selection curriculum from easy to hard improves contrastive learning.
• Regularizing the weights of negative samples can effectively mitigate the influence of false negatives.
[ABSTRACT FROM AUTHOR]
(See the illustrative sketch following this record.)
- Published
- 2024
- Full Text
- View/download PDF
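The following is a minimal, hypothetical Python/PyTorch sketch of the kind of curricular negative-sample weighting described in entry 15: an InfoNCE-style loss whose negative weights shift from easy to hard as training progresses, with an L2 penalty on the weights of the hardest negatives. The sigmoid-based weighting schedule, the temperature, the hardness threshold and the regularization coefficient are illustrative assumptions, not the authors' exact formulation.

    import torch
    import torch.nn.functional as F

    def curricular_contrastive_loss(z_anchor, z_positive, z_negatives,
                                    progress, temperature=0.1, reg_coeff=1e-3):
        # z_anchor, z_positive: (B, D); z_negatives: (B, N, D); progress in [0, 1].
        za = F.normalize(z_anchor, dim=-1)
        zp = F.normalize(z_positive, dim=-1)
        zn = F.normalize(z_negatives, dim=-1)

        pos_sim = (za * zp).sum(-1, keepdim=True) / temperature      # (B, 1)
        neg_sim = torch.einsum("bd,bnd->bn", za, zn) / temperature   # (B, N)

        # Curriculum: early in training emphasize easy negatives (low similarity),
        # later shift the weight toward hard negatives (high similarity).
        hardness = torch.sigmoid(neg_sim)                            # ~1 means hard
        weights = (1.0 - progress) * (1.0 - hardness) + progress * hardness
        weights = weights / weights.sum(dim=1, keepdim=True).clamp_min(1e-8)

        weighted_neg = (weights * neg_sim.exp()).sum(dim=1, keepdim=True)
        loss = -(pos_sim - torch.log(pos_sim.exp() + weighted_neg)).mean()

        # Penalize large weights on the hardest negatives (likely false negatives).
        hard_mask = (hardness > 0.9).float()
        return loss + reg_coeff * (weights * hard_mask).pow(2).sum(dim=1).mean()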
16. Distance learning by mining hard and easy negative samples for person re-identification.
- Author
- Zhu, Xiaoke, Jing, Xiao-Yuan, Zhang, Fan, Zhang, Xinyu, You, Xinge, and Cui, Xiang
- Subjects
- *DISTANCE education, *IDENTIFICATION
- Abstract
• We have proposed a Hard and Easy Negative samples mining based Distance learning (HEND) approach for person re-identification.
• We have designed a symmetric triplet constraint for the proposed HEND approach.
• We have proposed a Projection based HEND (PHEND) approach, which simultaneously learns a projection matrix and a distance metric.
• We have conducted extensive experiments in this paper to evaluate our approaches.
Distance learning is an effective technique for person re-identification. In practice, the hard negative samples usually contain more discriminative information than the easy negative samples. Therefore, it is necessary to investigate how to make full use of the discriminative information conveyed by different types of negative samples in the distance learning process. In this paper, we propose a Hard and Easy Negative samples mining based Distance learning (HEND) approach for person re-identification, which learns the distance metric by designing different objective functions for hard and easy negative samples, such that the discriminative information contained in negative samples can be exploited more effectively. Moreover, considering that there usually exist large differences between the images captured by different cameras, we further propose a projection-based HEND approach to reduce the influence of between-camera differences on the re-identification. Experimental results on seven pedestrian image datasets demonstrate the effectiveness of the proposed approaches. [ABSTRACT FROM AUTHOR]
(See the illustrative sketch following this record.)
- Published
- 2019
- Full Text
- View/download PDF
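Below is a hypothetical Python/PyTorch sketch of the general idea behind entry 16: a Mahalanobis-style distance trained with different hinge terms for hard and easy negatives. The hard/easy split rule, the margins and the parameterization of the metric are assumptions for illustration and do not reproduce the HEND or PHEND objectives.

    import torch

    def hard_easy_negative_loss(L, anchors, positives, negatives,
                                hard_margin=1.0, easy_margin=2.0):
        # L: (D, D) learnable factor; M = L L^T keeps the metric positive semidefinite.
        M = L @ L.t()

        def sq_dist(x, y):
            diff = x - y
            return torch.einsum("bd,de,be->b", diff, M, diff)

        d_pos = sq_dist(anchors, positives)
        d_neg = sq_dist(anchors, negatives)

        # A negative is treated as "hard" when it is closer to the anchor than the positive.
        hard = (d_neg < d_pos).float()
        hard_term = hard * torch.relu(d_pos - d_neg + hard_margin)
        easy_term = (1.0 - hard) * torch.relu(d_pos - d_neg + easy_margin)
        return (hard_term + easy_term).mean()

    # Usage sketch: L = torch.randn(64, 64, requires_grad=True), then optimize with SGD.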
17. Task-specific parameter decoupling for class incremental learning.
- Author
- Chen, Runhang, Jing, Xiao-Yuan, Wu, Fei, Zheng, Wei, and Hao, Yaru
- Subjects
- *MACHINE learning
- Abstract
Class incremental learning (CIL) enables deep networks to progressively learn new tasks while remembering previously learned knowledge. A popular design for CIL involves applying a shared feature extractor to learn old and new classes. However, this design can lead to representation interference, a phenomenon in which knowledge acquired from different tasks interferes with each other. This limits the ability to maintain feature information from previous tasks, especially without access to training data from previous tasks. To overcome this limitation, we present a novel CIL approach called task-specific model parameter decoupling, which includes a parameter decoupling (PD) framework and a dynamic-parameter-fusion (DPF) strategy. The PD framework compresses the knowledge of each task into a compact set of task-specific model parameters. In this situation, interactions between compact model parameters related to different tasks are eliminated to reduce representation interference. In addition, we adopt a DPF strategy to adaptively fuse the parameters of the larger model. As the learning task progresses, the DPF strategy enhances the model's adaptability and stability. Extensive experiments on benchmark datasets, including CIFAR100 and TinyImageNet, demonstrate the performance improvement of our approach over state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
18. Domain embedding transfer for unequal RGB-D image recognition.
- Author
- Cai, Ziyun, Jing, Xiao-Yuan, and Shao, Ling
- Subjects
- *IMAGE recognition (Computer vision)
- Abstract
• We propose a novel method which takes advantage of the depth data in the source domain and handles the domain distribution mismatch under the label inequality scenario simultaneously.
• Different from conventional unsupervised domain adaptation (UDA), we present a new scenario called unequal UDA, which can improve the recognition performance when the source categories are more than the target ones.
• The empirical results demonstrate that the proposed method achieves state-of-the-art performance compared with other methods on five real-world domain adaptation image dataset pairs.
Most recent unsupervised domain adaptation (UDA) approaches concentrate on the single RGB source to single RGB target task. They have to face the real-world scenario where the source domain can be collected from multiple modalities, e.g., RGB data and depth data. Our work focuses on a more practical and challenging scenario which recognizes RGB images by learning from RGB-D data under the label inequality scenario. We are confronted with three challenges: multiple modalities in the source domain, the domain shifting problem and unequal label numbers. To address the aforementioned settings, a novel method, referred to as Domain depth Embedding Transfer (DdET), is proposed, which takes advantage of the depth data in the source domain and handles the domain distribution mismatch under the label inequality scenario simultaneously. We conduct comprehensive experiments on five cross domain image classification tasks and observe that DdET can perform favorably against state-of-the-art methods, especially under the label inequality scenario. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
19. “Like charges repulsion and opposite charges attraction” law based multilinear subspace analysis for face recognition.
- Author
- Wu, Fei, Jing, Xiao-Yuan, Wu, Songsong, Gao, Guangwei, Ge, Qi, and Wang, Ruchuan
- Subjects
- *HUMAN facial recognition software, *MACHINE learning, *FEATURE extraction, *CLUSTER analysis (Statistics), *IMAGE processing
- Abstract
Multiple image variations occur in natural face images, such as the changes of pose, illumination, occlusion and expression. For non-specific variations based face recognition, learning effective features is an important research topic. Subspace learning is a widely used face recognition technique; however, numerous subspace analysis methods do not fully utilize the prior information of facial variations. Tensor-based multilinear subspace analysis methods can take advantage of the prior information, but they need to be further improved. With respect to a single facial variation, we observe that the image samples belonging to the same variation-state but different classes tend to cluster together, whereas those belonging to different variation-states but the same class tend to remain separate. This is adverse to classification. In this paper, motivated by the idea of charge law, “like charges repulsion and opposite charges attraction”, in which like and opposite charges are regarded as same and different variation-states, respectively, we propose a non-specific variations based discriminant analysis (NVDA) criterion. It searches for an optimal discriminant subspace in which samples belonging to same variation-state but different classes are separable, whereas those belonging to different variation-states but same class cluster together. We then propose a novel face recognition approach called non-specific variations based multi-subspace analysis (NVMSA), which serially utilizes NVDA criterion to learn multiple discriminant subspaces corresponding to different variations. In the proposed approach, we design a strategy to select the serial calculation order of variations and provide a rule to choose projection vectors with favorable discriminant capabilities. Furthermore, we formulate the locally statistical orthogonal constraints for the multiple subspaces learning to remove the local correlation of discriminant features obtained from multiple variations. Experiments on the AR, Weizmann, PIE and LFW databases demonstrate the effectiveness and efficiency of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
20. Multi-view local discrimination and canonical correlation analysis for image classification.
- Author
- Han, Lu, Jing, Xiao-Yuan, and Wu, Fei
- Subjects
- *CANONICAL correlation (Statistics), *MACHINE learning, *DISCRIMINANT analysis, *SUBSPACES (Mathematics), *ARTIFICIAL intelligence
- Abstract
Multi-view subspace learning has aroused much concern recently. Although there exist a few multi-view subspace learning methods taking both the discrimination information and the correlation information into consideration, they always ignore the use of the inter-view discriminant information. In view of this, we propose an approach called multi-view local discrimination and canonical correlation analysis (MLDC2A) for image classification. MLDC2A aims to learn a common multi-view subspace from multi-view data, by making use of not only the discriminant information from both intra-view and inter-view but also the correlation information between paired view data. Furthermore, in the learned subspace, the local geometric structure of multi-view data is preserved. We conduct experiments on MNIST, COIL-20, Multi-PIE, Caltech-101, and COCO datasets and the results indicate the effectiveness of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
21. Uncorrelated multi-set feature learning for color face recognition.
- Author
- Wu, Fei, Jing, Xiao-Yuan, Dong, Xiwei, Ge, Qi, Wu, Songsong, Liu, Qian, Yue, Dong, and Yang, Jing-Yu
- Subjects
- *FEATURE selection, *FACE perception, *FEATURE extraction, *ORTHOGONAL functions, *DATABASES
- Abstract
Most existing color face feature extraction methods need to perform color space transformation, and they reduce correlation of color components on the data level that has no direct connection with classification. Some methods extract features from R, G and B components serially with orthogonal constraints on the feature level, yet the serial extraction manner might make discriminabilities of features derived from three components distinctly different. Multi-set feature learning can jointly learn features from multiple sets of data effectively. In this paper, we propose two novel color face recognition approaches, namely multi-set statistical uncorrelated projection analysis (MSUPA) and multi-set discriminating uncorrelated projection analysis (MDUPA), which extract discriminant features from three color components together and simultaneously reduce the global statistical and global discriminating feature-level correlation between color components in a multi-set manner, respectively. Experiments on multiple public color face databases demonstrate that the proposed approaches outperform several related state-of-the-arts. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
22. Multi-spectral low-rank structured dictionary learning for face recognition.
- Author
- Jing, Xiao-Yuan, Wu, Fei, Zhu, Xiaoke, Dong, Xiwei, Ma, Fei, and Li, Zhiqiang
- Subjects
- *MACHINE learning, *HUMAN facial recognition software, *FEATURE extraction, *INFORMATION theory, *MATHEMATICAL regularization
- Abstract
Multi-spectral face recognition has been attracting increasing interest. In the last decade, several multi-spectral face recognition methods have been presented. However, it has not been well studied that how to jointly learn effective features with favorable discriminability from multiple spectra even when multi-spectral face images are severely contaminated by noise. Multi-view dictionary learning is an effective feature learning technique, which learns dictionaries from multiple views of the same object and has achieved state-of-the-art classification results. In this paper, we for the first time introduce the multi-view dictionary learning technique into the field of multi-spectral face recognition and propose a multi-spectral low-rank structured dictionary learning (MLSDL) approach. It learns multiple structured dictionaries, including a spectrum-common dictionary and multiple spectrum-specific dictionaries, which can fully explore both the correlated information and the complementary information among multiple spectra. Each dictionary contains a set of class-specified sub-dictionaries. Based on the low-rank matrix recovery theory, we apply low-rank regularization in multi-spectral dictionary learning procedure such that MLSDL can well solve the problem of multi-spectral face recognition with high levels of noise. We also design the low-rank structural incoherence term for multi-spectral dictionary learning, so as to reduce the redundancy among multiple spectrum-specific dictionaries. In addition, to enhance the efficiency of classification procedure, we design a low-rank structured collaborative representation classification scheme for MLSDL. Experimental results on HK PolyU, CMU and UWA hyper-spectral face databases demonstrate the effectiveness of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
23. Multi-view low-rank dictionary learning for image classification.
- Author
- Wu, Fei, Jing, Xiao-Yuan, You, Xinge, Yue, Dong, Hu, Ruimin, and Yang, Jing-Yu
- Subjects
- *MACHINE learning, *MATHEMATICAL regularization, *PERFORMANCE evaluation, *DATA dictionaries, *IMAGE processing
- Abstract
Recently, a multi-view dictionary learning (DL) technique has received much attention. Although some multi-view DL methods have been presented, they suffer from the problem of performance degeneration when large noise exists in multiple views. In this paper, we propose a novel multi-view DL approach named multi-view low-rank DL (MLDL) for image classification. Specifically, inspired by the low-rank matrix recovery theory, we provide a multi-view dictionary low-rank regularization term to solve the noise problem. We further design a structural incoherence constraint for multi-view DL, such that redundancy among dictionaries of different views can be reduced. In addition, to enhance efficiency of the classification procedure, we design a classification scheme for MLDL, which is based on the idea of collaborative representation based classification. We apply MLDL for face recognition, object classification and digit classification tasks. Experimental results demonstrate the effectiveness and efficiency of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
24. A novel face recognition approach based on kernel discriminative common vectors (KDCV) feature extraction and RBF neural network
- Author
- Jing, Xiao-Yuan, Yao, Yong-Fang, Yang, Jing-Yu, and Zhang, David
- Published
- 2008
- Full Text
- View/download PDF
25. Face and palmprint feature level fusion for single sample biometrics recognition
- Author
- Yao, Yong-Fang, Jing, Xiao-Yuan, and Wong, Hau-San
- Published
- 2007
- Full Text
- View/download PDF
26. Face recognition based on discriminant fractional Fourier feature extraction
- Author
- Jing, Xiao-Yuan, Wong, Hau-San, and Zhang, David
- Published
- 2006
- Full Text
- View/download PDF
27. An uncorrelated fisherface approach
- Author
- Jing, Xiao-Yuan, Wong, Hau-San, Zhang, David, and Tang, Yuan-Yan
- Published
- 2005
- Full Text
- View/download PDF
28. Face feature extraction and recognition based on discriminant subclass-center manifold preserving projection
- Author
- Jing, Xiao-Yuan, Lan, Chao, Zhang, David, Yang, Jing-Yu, Li, Min, Li, Sheng, and Zhu, Song-Hao
- Subjects
- *FEATURE extraction, *HUMAN facial recognition software, *DISCRIMINANT analysis, *MANIFOLDS (Mathematics), *DIMENSIONAL reduction algorithms, *DATABASES, *CLASSIFICATION
- Abstract
Abstract: Manifold learning is an effective dimensional reduction technique for face feature extraction, which, generally speaking, tends to preserve the local neighborhood structures of given samples. However, neighbors of a sample often comprise more inter-class data than intra-class data, which is an undesirable effect for classification. In this paper, we address this problem by proposing a subclass-center based manifold preserving projection (SMPP) approach, which aims at preserving the local neighborhood structure of subclass-centers instead of given samples. We theoretically show from a probability perspective that neighbors of a subclass-center would comprise more intra-class data than inter-class data, and are thus more desirable for classification. In order to take full advantage of the class separability, we further propose the discriminant SMPP (DSMPP) approach, which incorporates the subclass discriminant analysis (SDA) technique into SMPP. In contrast to related discriminant manifold learning methods, DSMPP is formulated as a dual-objective optimization problem and we present an analytical solution to it. Experimental results on the public AR, FERET and CAS-PEAL face databases demonstrate that the proposed approaches are more effective than related manifold learning and discriminant manifold learning methods in classification performance. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
29. Improvements on the linear discrimination technique with application to face recognition
- Author
- Jing, Xiao-Yuan, Zhang, David, and Yao, Yong-Fang
- Published
- 2003
- Full Text
- View/download PDF
30. Face and palmprint pixel level fusion and Kernel DCV-RBF classifier for small sample biometric recognition
- Author
- Jing, Xiao-Yuan, Yao, Yong-Fang, Zhang, David, Yang, Jing-Yu, and Li, Miao
- Subjects
- *ANTHROPOMETRY, *PHYSICAL & theoretical chemistry, *STATISTICAL correlation, *LINEAR free energy relationship
- Abstract
Abstract: Recently, multi-modal biometric fusion techniques have attracted increasing attention for their ability to improve the recognition performance in some difficult biometric problems. The small sample biometric recognition problem is such a research difficulty in real-world applications. So far, most research work on fusion techniques has been done at the highest fusion level, i.e. the decision level. In this paper, we propose a novel fusion approach at the lowest level, i.e. the image pixel level. We first combine two kinds of biometrics: the face feature, which is a representative of contactless biometric, and the palmprint feature, which is a typical contacting biometric. We perform the Gabor transform on face and palmprint images and combine them at the pixel level. The correlation analysis shows that there is very small correlation between their normalized Gabor-transformed images. This paper also presents a novel classifier, KDCV-RBF, to classify the fused biometric images. It extracts the image discriminative features using a kernel discriminative common vectors (KDCV) approach and classifies the features by using the radial basis function (RBF) network. As the test data, we take the two largest public face databases (AR and FERET) and a large palmprint database. The experimental results demonstrate that the proposed biometric fusion recognition approach is a rather effective solution for the small sample recognition problem. [Copyright Elsevier]
- Published
- 2007
- Full Text
- View/download PDF
31. Face recognition based on 2D Fisherface approach
- Author
- Jing, Xiao-Yuan, Wong, Hau-San, and Zhang, David
- Subjects
- *FACE perception, *DISCRIMINATION (Sociology), *IMAGING systems, *FISHERS
- Abstract
Abstract: Two-dimensional (2D) discrimination analysis using methods such as 2D PCA and Image LDA is of interest in face recognition because it extracts discriminative features faster than one-dimensional (1D) discrimination analysis. However, existing 2D methods generally use more discriminative features and take longer to test than 1D methods. 2D PCA in particular cannot make full use of the Fisher discriminant criterion. Image LDA also has drawbacks in that it cannot perform 2D principal component analysis and discards components with poor discriminative capabilities. In addition, existing 2D methods cannot provide an automatic strategy to choose 2D principal components or discriminant vectors. In this paper, we propose 2D Fisherface, a novel discrimination approach that combines the two-stage "PCA plus LDA" strategy and 2D discrimination techniques. It can extract face discriminative features by automatically selecting two-dimensional principal components and discriminant vectors. Using the AR database as the test data, it is shown that the proposed approach is faster and more effective than several representative 1D and 2D discrimination methods. [Copyright Elsevier]
(See the illustrative sketch following this record.)
- Published
- 2006
- Full Text
- View/download PDF
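As a rough illustration of the two-stage pipeline sketched in entry 31 (two-dimensional principal component analysis followed by linear discriminant analysis), here is a hypothetical Python example. The component count, image size and data are placeholders, and the paper's automatic selection of 2D principal components and discriminant vectors is not reproduced.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def two_d_pca(images, n_components=8):
        # 2D PCA: eigenvectors of the image covariance G = E[(A - mean)^T (A - mean)].
        A = np.asarray(images, dtype=float)              # (n, h, w)
        mean = A.mean(axis=0)
        centered = A - mean
        G = np.einsum("nij,nik->jk", centered, centered) / len(A)
        vals, vecs = np.linalg.eigh(G)                   # ascending eigenvalues
        return mean, vecs[:, ::-1][:, :n_components]     # top eigenvectors, (w, n_components)

    # Hypothetical data: 100 grayscale 32x32 "faces" from 10 identities.
    rng = np.random.default_rng(0)
    images = rng.normal(size=(100, 32, 32))
    labels = np.repeat(np.arange(10), 10)

    mean, X = two_d_pca(images)
    feats = np.einsum("nij,jk->nik", images - mean, X).reshape(len(images), -1)
    lda = LinearDiscriminantAnalysis().fit(feats, labels)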
32. A Fourier–LDA approach for image recognition
- Author
- Jing, Xiao-Yuan, Tang, Yuan-Yan, and Zhang, David
- Subjects
- *IMAGE processing, *IMAGING systems, *COMPUTER graphics, *CLASSIFICATION
- Abstract
Abstract: Fourier transform and linear discrimination analysis (LDA) are two commonly used techniques of image processing and recognition. Based on them, we propose a Fourier–LDA approach (FLA) for image recognition. It selects appropriate Fourier frequency bands with favorable linear separability by using a two-dimensional separability judgment. Then it extracts two-dimensional linear discriminative features to perform the classification. Our experimental results on different image data prove that FLA obtains better classification performance than other linear discrimination methods. [Copyright Elsevier]
- Published
- 2005
- Full Text
- View/download PDF
33. UODV: improved algorithm and generalized theory
- Author
- Jing, Xiao-Yuan, Zhang, David, and Jin, Zhong
- Subjects
- *IMAGE analysis, *IMAGING systems, *MULTIVARIATE analysis, *ALGORITHMS
- Abstract
Uncorrelated optimal discrimination vectors (UODV) is an effective linear discrimination approach. However, this approach has disadvantages in both the algorithm and the theory. In light of this, we propose an improved UODV algorithm based on typical principal component analysis (TPCA), which can satisfy statistical uncorrelation and utilize the total scatter information of the training samples. Then, a new and generalized theorem on UODV is presented. This generalized theorem reveals the essential relationship between UODV and the well-known Fisherface method, and proves that our improved UODV algorithm is theoretically superior to the Fisherface method. Experimental results on both 1-D and 2-D data prove that our algorithm outperforms the original UODV approach and the Fisherface method. [Copyright Elsevier]
- Published
- 2003
- Full Text
- View/download PDF
34. Face recognition based on a group decision-making combination approach
- Author
- Jing, Xiao-Yuan, Zhang, David, and Yang, Jing-Yu
- Subjects
- *FACE perception, *IMAGE processing
- Abstract
This paper proposes a novel, real-time classifier combination approach, the group decision-making combination (GDC) approach, which can dynamically select classifiers and perform linear combination. We also prove that the orthogonal wavelet transform can be regarded as an effective image preprocessing tool adapted to classifier combination. GDC has been successfully used for face recognition, where it improves the recognition rate for algebraic features. Experimental results also show that it is superior to the conventional combination method, majority voting. [Copyright Elsevier]
- Published
- 2003
- Full Text
- View/download PDF
35. mGlu2/3 receptor in the prelimbic cortex is implicated in stress resilience and vulnerability in mice.
- Author
- Jing, Xiao-Yuan, Wang, Yan, Zou, Hua-Wei, Li, Zi-Lin, Liu, Ying-Juan, and Li, Lai-Fu
- Subjects
- *GLUTAMATE receptors, *DRUG target, *PREFRONTAL cortex, *MICE, *ANXIETY, *DISASTER resilience, *MENTAL illness
- Abstract
Resilience, referring to "achieving a positive outcome in the face of adversity", is a common phenomenon in daily life. Elucidating the mechanisms of stress resilience is instrumental to developing more effective treatments for stress-related psychiatric disorders such as depression. Metabotropic glutamate receptors (mGlu2/3 and mGlu5) within the medial prefrontal cortex (mPFC) have been recently recognized as promising therapeutic targets for rapid-acting antidepressant treatment. In this study, we assessed the functional roles of the mGlu2/3 and mGlu5 within different subregions of the mPFC in modulating stress resilience and vulnerability by using chronic social defeat stress (CSDS) paradigms in mice. Our results showed that approximately 51.6% of the subjects exhibited depression- or anxiety-like behaviors after exposure to CSDS. When a susceptible mouse was confronted with an attacker, c-Fos expression in the prelimbic cortex (PrL) subregion of the mPFC substantially increased. Compared with the resilient and control groups, the expression of mGlu2/3 was elevated in the PrL of the susceptible group. The expression of mGlu5 showed no significant difference among the three groups in the whole mPFC. Finally, we found that the social avoidance symptoms of the susceptible mice were rapidly relieved by intra-PrL administration of LY341495, an mGluR2/3 antagonist. The above results indicate that mGluR2/3 within the PrL may play an important regulatory role in stress-related psychiatric disorders. Our results are meaningful, as they expand our understanding of stress resilience and vulnerability, which may open an avenue to develop novel, personalized approaches to mitigate depression and promote stress resilience.
• The vulnerable rate is about 51.6% after exposure to chronic social defeat stress.
• Susceptible mice showed elevated mGlu2/3 expression in the PrL.
• mGluR2/3 antagonists rapidly relieved social avoidance symptoms of the susceptible mice.
[ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
36. Spectrum-aware discriminative deep feature learning for multi-spectral face recognition.
- Author
- Wu, Fei, Jing, Xiao-Yuan, Feng, Yujian, Ji, Yi-mu, and Wang, Ruchuan
- Subjects
- *MULTISPECTRAL imaging, *DEEP learning, *HUMAN facial recognition software, *VISIBLE spectra
- Abstract
• The deep metric learning technique is first introduced into the multi-spectral face recognition task.
• The spectrum-aware embedding loss takes both the spectrum and class label information into consideration.
• The multi-spectral discriminant correlation loss fully exploits the useful correlation information in multi-spectral images.
• The proposed approach significantly outperforms state-of-the-art multi-spectral face recognition methods.
One primary challenge of face recognition is that the performance is seriously affected by varying illumination. Multi-spectral imaging can capture face images in the visible spectrum and beyond, which is deemed to be an effective technology in response to this challenge. For current multi-spectral imaging-based face recognition methods, how to fully explore the discriminant and correlation features from both the intra-spectrum and inter-spectrum aspects with only a limited number of multi-spectral samples for model training has not been well studied. To address this problem, in this paper, we propose a novel face recognition approach named Spectrum-aware Discriminative Deep Learning (SDDL). To take full advantage of the multi-spectral training samples, we build a discriminative multi-spectral network (DMN) and take face sample pairs as the input of the network. By jointly considering the spectrum and the class label information, SDDL trains the network to project sample pairs into a discriminant feature subspace, on which the intrinsic relationship including the intra- and inter-spectrum discrimination and the inter-spectrum correlation among face samples is well discovered. The proposed approach is evaluated on three widely used datasets, HK PolyU, CMU, and UWA. Extensive experimental results demonstrate the superiority of SDDL over state-of-the-art competing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
37. Modality-specific and shared generative adversarial network for cross-modal retrieval.
- Author
- Wu, Fei, Jing, Xiao-Yuan, Wu, Zhiyong, Ji, Yimu, Dong, Xiwei, Luo, Xiaokai, Huang, Qinghua, and Wang, Ruchuan
- Subjects
- *LABELS, *WAREHOUSE automation, *MODAL logic
- Abstract
• We propose a Modality-Specific and Shared Generative Adversarial Network approach.
• The modality-specific and modality-shared features are jointly explored and leveraged.
• The inter-modal invariance and the inter- and intra-modal discrimination are well modeled.
• Superiority of our approach is demonstrated on multiple benchmark multi-modal datasets.
Cross-modal retrieval aims to realize accurate and flexible retrieval across different modalities of data, e.g., image and text, which has achieved significant progress in recent years, especially since generative adversarial networks (GAN) were used. However, there still exists much room for improvement. How to jointly extract and utilize both the modality-specific (complementarity) and modality-shared (correlation) features effectively has not been well studied. In this paper, we propose an approach named Modality-Specific and Shared Generative Adversarial Network (MS2GAN) for cross-modal retrieval. The network architecture consists of two sub-networks that aim to learn modality-specific features for each modality, followed by a common sub-network that aims to learn the modality-shared features for each modality. Network training is guided by the adversarial scheme between the generative and discriminative models. The generative model learns to predict the semantic labels of features, model the inter- and intra-modal similarity with label information, and ensure the difference between the modality-specific and modality-shared features, while the discriminative model learns to classify the modality of features. The learned modality-specific and shared feature representations are jointly used for retrieval. Experiments on three widely used benchmark multi-modal datasets demonstrate that MS2GAN can outperform state-of-the-art related works. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
38. Improvements on the uncorrelated optimal discriminant vectors
- Author
- Jing, Xiao-Yuan, Zhang, David, and Jin, Zhong
- Subjects
- *RECOGNITION (Psychology), *ALGORITHMS
- Abstract
The algorithm and the theorem of uncorrelated optimal discriminant vectors (UODV) were proposed by Jin. In this paper, we present new improvements to Jin's method, which include an improved approach for the original algorithm and a generalized theorem for UODV. Experimental results prove that our approach is superior to the original in the recognition rate. [Copyright Elsevier]
- Published
- 2003
- Full Text
- View/download PDF
39. Unequal adaptive visual recognition by learning from multi-modal data.
- Author
- Cai, Ziyun, Zhang, Tengfei, Jing, Xiao-Yuan, and Shao, Ling
- Subjects
- *VISUAL learning
- Abstract
• It is an early work to explore unequal category level across RGB-D domains.
• The proposed method can handle the challenging unequal category scenario.
• Experiments show that the proposed method outperforms state-of-the-art approaches.
Conventional domain adaptation tries to leverage knowledge obtained from the single source domain to recognize the data in the target domain, where only one modality exists in the source domain. This neglects the scenario that the source domain can be acquired from multi-modal data, such as RGB data and depth data. In addition, conventional domain adaptation approaches generally assume source and target domains have the identical number of categories, which is quite restrictive for real-world applications. In practice, the number of categories in the target domain is often less than that in the source domain. In this work, we focus on a more practical and challenging task that recognizes RGB data by learning from RGB-D data under an unequal label scenario, which suffers from three challenges: i) the addition of depth information, ii) the domain mismatch problem and iii) the negative transfer caused by unequal label numbers. Our main contribution is a novel method, referred to as unequal Distribution Visual-Depth Adaption (uDVDA), which takes advantage of depth data and handles the domain mismatch problem under label inequality, simultaneously. Experiments show that uDVDA outperforms state-of-the-art models on different datasets, especially under the unequal label scenario. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
40. Dual multi-kernel discriminant analysis for color face recognition.
- Author
- Liu, Qian, Wang, Chao, and Jing, Xiao-yuan
- Subjects
- *DISCRIMINANT analysis, *COMPUTER vision, *MACHINE learning, *HUMAN facial recognition software, *FEATURE extraction
- Abstract
With the increasing use of color images in the fields of pattern recognition, computer vision and machine learning, the color face recognition technique has become important; its key problem is how to make full use of the color information and extract effective discriminating features. In this paper, we propose a novel nonlinear feature extraction approach for color face recognition, named dual multi-kernel discriminant analysis (DMDA), where we design a kernel selection strategy to select the optimal kernel mapping function for each color component of face images, further design a color space selection strategy to choose the most suitable space, then separately map different color components of face images into different high-dimensional kernel spaces, and finally perform multi-kernel learning and discriminant analysis not only within each component but also between different components. Experimental results on the public face recognition grand challenge (FRGC) version 2 and labeled faces in the wild (LFW) databases illustrate that our approach outperforms several representative color face recognition methods. [ABSTRACT FROM AUTHOR]
(See the illustrative sketch following this record.)
- Published
- 2017
- Full Text
- View/download PDF
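A much-simplified, hypothetical Python sketch of the multi-kernel idea in entry 40: one RBF kernel per color component combined into a single precomputed kernel for a classifier. The per-channel kernel parameters, the uniform combination weights and the SVM classifier are assumptions; DMDA's kernel and color-space selection strategies and its discriminant analysis are not reproduced.

    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    def combined_color_kernel(X, Y=None, gammas=(0.5, 0.5, 0.5), weights=(1/3, 1/3, 1/3)):
        # X, Y: (n, 3, d) arrays of per-channel features; returns a weighted kernel sum.
        Y = X if Y is None else Y
        return sum(w * rbf_kernel(X[:, c], Y[:, c], gamma=g)
                   for c, (g, w) in enumerate(zip(gammas, weights)))

    # Hypothetical data: 90 color samples, each channel flattened to 64 dimensions, 3 classes.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(90, 3, 64))
    y = np.repeat(np.arange(3), 30)

    K_train = combined_color_kernel(X)
    clf = SVC(kernel="precomputed").fit(K_train, y)
    print(clf.score(K_train, y))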
41. Learning robust and discriminative low-rank representations for face recognition with occlusion.
- Author
- Gao, Guangwei, Yang, Jian, Jing, Xiao-Yuan, Shen, Fumin, Yang, Wankou, and Yue, Dong
- Subjects
- *HUMAN facial recognition software, *MACHINE learning, *DISCRIMINANT analysis, *ROBUST statistics, *IMAGE analysis
- Abstract
For robust face recognition tasks, we particularly focus on the ubiquitous scenarios where both training and testing images are corrupted due to occlusions. Previous low-rank based methods stacked each error image into a vector and then used the L1 or L2 norm to measure the error matrix. However, in the stacking step, the structure information of the error image can be lost. Departing from previous methods, in this paper we propose a novel method that exploits the low-rankness of both the data representation and each occlusion-induced error image simultaneously, by which the global structure of the data together with the error images can be well captured. In order to learn more discriminative low-rank representations, we formulate our objective such that the learned representations are optimal for classification with the available supervised information and close to an ideal-code regularization term. With strong structure information preserving and discrimination capabilities, the learned robust and discriminative low-rank representation (RDLRR) works very well on face recognition problems, especially with face images corrupted by continuous occlusions. Together with a simple linear classifier, the proposed approach is shown to outperform several other state-of-the-art face recognition methods on databases with a variety of face variations. [ABSTRACT FROM AUTHOR]
(See the illustrative sketch following this record.)
- Published
- 2017
- Full Text
- View/download PDF
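Entry 41 builds on low-rank matrix recovery; for reference, here is a small hypothetical Python sketch of singular value soft-thresholding, the proximal operator behind nuclear-norm regularized problems. It illustrates the low-rank machinery only and is not the paper's RDLRR objective; the threshold and toy data are arbitrary.

    import numpy as np

    def singular_value_threshold(X, tau):
        # Proximal operator of tau * nuclear norm: soft-threshold the singular values.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    # Toy example: a rank-3 matrix plus noise, shrunk back toward low rank.
    rng = np.random.default_rng(2)
    low_rank = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 40))
    noisy = low_rank + 0.1 * rng.normal(size=(50, 40))
    denoised = singular_value_threshold(noisy, tau=1.0)
    print(np.linalg.matrix_rank(denoised))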
42. Cross-view panorama image synthesis with progressive attention GANs.
- Author
- Wu, Songsong, Tang, Hao, Jing, Xiao-Yuan, Qian, Jianjun, Sebe, Nicu, Yan, Yan, and Zhang, Qinghua
- Subjects
- *GENERATIVE adversarial networks, *SUBURBS, *PANORAMAS, *DATA augmentation, *IMAGE segmentation
- Abstract
• A progressive generation framework based on GANs is proposed to generate high-resolution ground-view panorama images solely from low-resolution aerial images.
• A novel cross-stage attention module is proposed to bridge adjacent generation stages of the progressive generation process so that the quality of the synthesized panorama image can be continually improved.
• A novel orientation-aware data augmentation strategy is proposed to utilize the geometric relation between aerial and segmentation images for model training.
• The proposed model establishes new state-of-the-art results for the task of cross-view panorama scene image synthesis in two scenarios: suburb area and urban area.
Despite the significant progress of conditional image generation, it remains difficult to synthesize a ground-view panorama image from a top-view aerial image. Among the core challenges are the vast differences in image appearance and resolution between aerial images and panorama images, and the limited side information available for top-to-ground viewpoint transformation. To address these challenges, we propose a new Progressive Attention Generative Adversarial Network (PAGAN) with two novel components: a multistage progressive generation framework and a cross-stage attention module. In the first stage, an aerial image is fed into a U-Net-like network to generate one local region of the panorama image and its corresponding segmentation map. Then, the synthetic panorama image region is extended and refined through the following generation stages with our proposed cross-stage attention module that passes semantic information forward stage-by-stage. In each of the successive generation stages, the synthetic panorama image and segmentation map are separately fed into an image discriminator and a segmentation discriminator to compute both real/fake and feature alignment score maps for discrimination. The model is trained with a novel orientation-aware data augmentation strategy based on the geometric relation between aerial and panorama images. Extensive experimental results on two cross-view datasets show that PAGAN generates high-quality panorama images with more convincing details than state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
43. Adaptive graph convolutional collaboration networks for semi-supervised classification.
- Author
- Fu, Sichao, Wang, Senlin, Liu, Weifeng, Liu, Baodi, Zhou, Bin, You, Xinhua, Peng, Qinmu, and Jing, Xiao-Yuan
- Subjects
- *CLASSIFICATION, *ELECTRONIC data processing, *NEIGHBORHOODS, *DEEP learning, *PROBLEM solving, *MULTIPLE criteria decision making
- Abstract
Graph convolution networks (GCNs) have achieved remarkable success in processing non-Euclidean data. GCNs update the feature representations of each sample by aggregating the structure information from K-order (layer) neighborhood samples. Existing GCN variants rely heavily on the K-th layer semantic information when aggregating K-order neighborhood information. However, semantic features from different convolution layers have distinct sample attributes. The single-layer semantic feature is only a one-sided feature representation. Besides, the semantic features of traditional GCNs become over-smoothed as multi-layer structure information aggregates. In this paper, to solve the above-mentioned problem, we propose adaptive graph convolutional collaboration networks (AGCCNs) for the semi-supervised classification task. AGCCNs can fully use the different scales of discrimination information contained in the different convolutional layers. Specifically, AGCCNs utilize the attention mechanism to learn the relevance (contribution) coefficient of the deep semantic features from different convolution layers for the task, which aims to effectively discriminate their importance. After multiple optimizations, AGCCNs can adaptively learn the robust deep semantic features via the effective semantic fusion process between multi-layer semantic information. Compared with GCNs that only utilize the K-th layer semantic features, AGCCNs make the learned deep semantic features contain richer and more robust semantic information. What is more, our proposed AGCCNs can aggregate the appropriate K-order neighborhood information for each sample, which can relieve the oversmoothing issue of traditional GCNs and better generalize shallow GCNs to deeper layers. Abundant experimental results on several popular datasets demonstrate the superiority of our proposed AGCCNs compared with traditional GCNs. [ABSTRACT FROM AUTHOR]
(See the illustrative sketch following this record.)
- Published
- 2022
- Full Text
- View/download PDF
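A hypothetical Python/PyTorch sketch of the two ingredients described in entry 43: standard GCN propagation with a normalized adjacency matrix, and an attention-style fusion of the semantic features produced at different layers. The attention parameterization, depth and dimensions are illustrative assumptions rather than the AGCCN architecture itself.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LayerAttentionGCN(nn.Module):
        # Fuses every layer's output with learned, per-node attention weights.
        def __init__(self, in_dim, hid_dim, n_layers=3):
            super().__init__()
            dims = [in_dim] + [hid_dim] * n_layers
            self.layers = nn.ModuleList(nn.Linear(dims[i], dims[i + 1], bias=False)
                                        for i in range(n_layers))
            self.attn = nn.Linear(hid_dim, 1)

        def forward(self, x, adj_norm):
            # adj_norm: symmetrically normalized adjacency D^{-1/2} (A + I) D^{-1/2}.
            outputs, h = [], x
            for lin in self.layers:
                h = F.relu(adj_norm @ lin(h))            # one round of neighborhood aggregation
                outputs.append(h)
            stacked = torch.stack(outputs, dim=1)        # (N, L, hid_dim)
            scores = F.softmax(self.attn(stacked).squeeze(-1), dim=1)  # per-node layer weights
            return (scores.unsqueeze(-1) * stacked).sum(dim=1)

    # Usage sketch: model = LayerAttentionGCN(16, 32); out = model(torch.randn(5, 16), torch.eye(5))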
44. Attention Cycle-consistent universal network for More Universal Domain Adaptation.
- Author
- Cai, Ziyun, Huang, Yawen, Zhang, Tengfei, Jing, Xiao-Yuan, Zheng, Yefeng, and Shao, Ling
- Abstract
Existing Universal Domain Adaptation (UniDA) approaches can handle various domain adaptation (DA) tasks, which need no prior information about the category overlap across target and source domains. However, the traditional UniDA scenario cannot fully cover every DA scenario, e.g., Multi-Source DA is absent. Therefore, aiming to simultaneously handle more DA scenarios in nature, we propose the More Universal Domain Adaptation (MUniDA) task. There are three challenges in MUniDA: (i) Category shift between source and target domains; (ii) Domain shift, especially the domain shift among multiple modalities in the source, which is ignored by the current UniDA approaches; (iii) How to recognize common categories across domains? We propose a more universally applicable DA approach that can tackle the above challenges without any modification, called Attention Cycle-consistent Universal Network (A-CycleUN). We show through extensive experiments on several benchmarks that A-CycleUN works stably and outperforms baselines across different MUniDA settings.
• We introduce a more practical setting called More Universal Domain Adaptation.
• A novel end-to-end framework, Attention Cycle-consistent Universal Network, is proposed.
• The proposed method can achieve state-of-the-art performance under the new setting.
[ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. The GABA(B1) receptor within the infralimbic cortex is implicated in stress resilience and vulnerability in mice.
- Author
- Zou, Hua-Wei, Li, Zi-Lin, Jing, Xiao-Yuan, Wang, Yan, Liu, Ying-Juan, and Li, Lai-Fu
- Subjects
- *GABA, *PSYCHOLOGICAL stress, *DESPAIR, *PREFRONTAL cortex, *PHYSICAL mobility, *MICE
- Abstract
• The vulnerable rate is about 41.9% after exposure to chronic unpredictable stress.
• Susceptible mice showed elevated GABA(B1) expression in the IL.
• Intra-IL injection of baclofen rapidly relieved social avoidance symptoms of the susceptible mice.
Resilience is the capacity to maintain normal psychological and physical functions in the face of stress and adversity. Understanding how one can develop and enhance resilience is of great relevance to not only promoting coping mechanisms but also mitigating maladaptive stress responses in psychiatric illnesses such as depression. Preclinical studies suggest that GABA(B) receptors (GABA(B1) and GABA(B2)) are potential targets for the treatment of major depression. In this study, we assessed the functional role of GABA(B) receptors in stress resilience and vulnerability by using a chronic unpredictable stress (CUS) model in mice. As the medial prefrontal cortex (mPFC) plays a key role in the top-down modulation of stress responses, we focused our study on this brain structure. Our results showed that only approximately 41.9% of subjects exhibited anxiety- or despair-like behaviors after exposure to CUS. The vulnerable mice showed higher c-Fos expression in the infralimbic cortex (IL) subregion of the mPFC when exposed to a social stressor. Moreover, the expression of GABA(B1) but not GABA(B2) receptors was significantly downregulated in the IL subregion of susceptible mice. Finally, we found that intra-IL administration of baclofen, a GABA(B) receptor agonist, rapidly relieved the social avoidance symptoms of the "stress-susceptible" mice. Taken together, our results show that the GABA(B1) receptor within the IL may play an important role in stress resilience and vulnerability, and thus open an avenue to develop novel, personalized approaches to promote stress resilience and treat stress-related psychiatric disorders. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
46. Dual-regression model for visual tracking.
- Author
- Li, Xin, Liu, Qiao, Fan, Nana, Zhou, Zikun, He, Zhenyu, and Jing, Xiao-yuan
- Subjects
- *ARTIFICIAL satellite tracking, *ALGORITHMS
- Abstract
Existing regression based tracking methods built on correlation filter model or convolution model do not take both accuracy and robustness into account at the same time. In this paper, we propose a dual-regression framework comprising a discriminative fully convolutional module and a fine-grained correlation filter component for visual tracking. The convolutional module trained in a classification manner with hard negative mining ensures the discriminative ability of the proposed tracker, which facilitates the handling of several challenging problems, such as drastic deformation, distractors, and complicated backgrounds. The correlation filter component built on the shallow features with fine-grained features enables accurate localization. By fusing these two branches in a coarse-to-fine manner, the proposed dual-regression tracking framework achieves a robust and accurate tracking performance. Extensive experiments on the OTB2013, OTB2015, and VOT2015 datasets demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
47. Multi-view common component discriminant analysis for cross-view classification.
- Author
- You, Xinge, Xu, Jiamiao, Yuan, Wei, Jing, Xiao-Yuan, Tao, Dacheng, and Zhang, Taiping
- Subjects
- *INFORMATION commons, *COMPUTER vision, *CLASSIFICATION, *DISCRIMINANT analysis, *MATHEMATICAL regularization
- Abstract
Highlights
• We extract view-independent features to remove the view discrepancy.
• We integrate discriminant regularization to learn a discriminant subspace.
• We extend single-view local geometry preservation to the multi-view scenario.
• We integrate local consistency regularization to learn a structured subspace.
Abstract
Cross-view classification, which means classifying samples from heterogeneous views, is a significant yet challenging problem in computer vision. An effective solution to this problem is multi-view subspace learning (MvSL), which intends to find a common subspace for multi-view data. Although great progress has been made, existing methods usually fail to find a suitable subspace when multi-view data lies on nonlinear manifolds, thus leading to performance deterioration. To circumvent this drawback, we propose Multi-view Common Component Discriminant Analysis (MvCCDA) to handle view discrepancy, discriminability and nonlinearity in a joint manner. Specifically, our MvCCDA incorporates supervised information and local geometric information into the common component extraction process to learn a discriminant common subspace and to discover the nonlinear structure embedded in multi-view data. Optimization and complexity analysis of MvCCDA are also presented for completeness. Our MvCCDA is competitive with the state-of-the-art MvSL based methods on four benchmark datasets, demonstrating its superiority. [ABSTRACT FROM AUTHOR]
(See the illustrative sketch following this record.)
- Published
- 2019
- Full Text
- View/download PDF
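Entry 47's MvCCDA learns a common subspace that also encodes discriminant and local-geometry information; as a minimal point of reference, here is a hypothetical scikit-learn example of plain two-view CCA, which captures only the correlation part of that objective. The synthetic data and component count are placeholders.

    import numpy as np
    from sklearn.cross_decomposition import CCA

    # Two synthetic views generated from shared latent factors.
    rng = np.random.default_rng(3)
    shared = rng.normal(size=(200, 5))
    view1 = shared @ rng.normal(size=(5, 40)) + 0.1 * rng.normal(size=(200, 40))
    view2 = shared @ rng.normal(size=(5, 30)) + 0.1 * rng.normal(size=(200, 30))

    cca = CCA(n_components=5)
    z1, z2 = cca.fit_transform(view1, view2)        # projections into a common subspace
    print(np.corrcoef(z1[:, 0], z2[:, 0])[0, 1])    # first projected pair is highly correlated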
48. Multi-orientation and multi-scale features discriminant learning for palmprint recognition.
- Author
- Ma, Fei, Zhu, Xiaoke, Wang, Cailing, Liu, Huajun, and Jing, Xiao-Yuan
- Subjects
- *PALMPRINT recognition, *HAMMING distance, *HAMMING codes, *IMAGE databases, *DISCRIMINANT analysis, *DESCRIPTOR systems
- Abstract
Palmprint contains stable and effective features, especially the orientation and scale features of palm lines, and has now become an important identity recognition technique for surveillance and safety applications. Existing palmprint recognition methods using texture features can be roughly divided into two categories: coding-based and local descriptor based methods. As compared with the latter category, the former one can make full use of the palmprint specific features and acquire fast matching speed. However, most existing coding-based methods are based on the competitive coding scheme, in which the scale features of palmprint cannot be well exploited. In this work, we propose a discriminant orientation and scale features learning (DOSFL) for palmprint recognition. By introducing the idea of discriminant analysis into palmprint coding, DOSFL can extract the orientation and scale features with more favorable discriminability. Then, DOSFL utilizes four code bits to represent both the orientation and scale features of palmprint, and employs the Hamming distance for code matching. To make better use of the orientation and scale information contained in palmprint samples, we further propose a multi-orientation and multi-scale features discriminant learning (MOSDL) approach for palmprint recognition, which can fuse different orientation and scale feature information effectively in the discriminant learning process. Experimental results on two publicly available palmprint databases, including the HK PolyU database and UST palmprint image database, demonstrate that our proposed approach can achieve better recognition results than the compared methods. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
49. Multi-view coupled dictionary learning for person re-identification.
- Author
- Ma, Fei, Zhu, Xiaoke, Liu, Qinglong, Song, Chengfang, Jing, Xiao-Yuan, and Ye, Dengpan
- Subjects
- *FEATURE selection, *BLENDED learning
- Abstract
In recent years, person re-identification has become an important technique, which can be applied in computer vision, pedestrian tracking and intelligent monitoring. Due to the large variations of visual appearance caused by view angle, pose changes, lighting changes, background clutter and occlusion, person re-identification is very challenging. In practice, there exist large differences among different types of features and among different cameras. To improve the representation of different features, we propose a multi-view based coupled dictionary pair learning framework, which can learn dictionary pairs for multiple categories of features, e.g., color features, texture features and hybrid features. Specifically, with the learned color feature dictionary pair, we can obtain the color feature representation coefficients of each person from different cameras. The texture feature dictionary pair seeks to learn the texture feature representation coefficients of each person from both cameras. The hybrid feature dictionary pair aims to learn the hybrid feature coefficients for each person. The learned coupled dictionary pairs can demonstrate the intrinsic relationship of different cameras and different types of features. When the image resolution is too low, the texture information will be lost to some extent. Since there are few high-resolution person datasets so far, we contribute a newly collected dataset, named the High-Resolution Pedestrian re-Identification Dataset (HRPID), collected on the campus of Wuhan University. The size of the person images is normalized to 230 × 560 pixels, which is larger than in existing person re-identification datasets. Experimental results on the new dataset and two public pedestrian datasets demonstrate that our proposed approach performs better than the other competing methods. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
50. Involvement of oxytocin and GABA in consolation behavior elicited by socially defeated individuals in mandarin voles.
- Author
- Li, Lai-Fu, Yuan, Wei, He, Zhi-Xiong, Wang, Li-Min, Jing, Xiao-Yuan, Zhang, Jing, Yang, Yang, Guo, Qian-Qian, Zhang, Xue-Ni, Cai, Wen-Qi, Hou, Wen-Juan, Jia, Rui, and Tai, Fa-Dao
- Subjects
- *GABA, *VOLES, *BEHAVIOR, *PREOPTIC area, *CONSOLATION, *CATATONIA
- Abstract
Highlights
• "Observers" greatly increased grooming toward their defeated partners.
• Fos expressions were elevated in some brain structures.
• OT and GABA neurons were activated in the PVN and ACC, respectively.
• Consolation was blocked by an OT or a GABA(A) receptor antagonist within the ACC.
Abstract
Consolation, which entails comforting contact directed toward a distressed party, is a common empathetic response in humans and other species with advanced cognition. Here, using the social defeat paradigm, we provide empirical evidence that highly social and monogamous mandarin voles (Microtus mandarinus) increased grooming toward a socially defeated partner but not toward a partner who underwent only separation. This selective behavioral response existed in both males and females. Accompanied with these behavioral changes, c-Fos expression was elevated in many of the brain regions relevant for emotional processing, such as the anterior cingulate cortex (ACC), bed nucleus of the stria terminalis, paraventricular nucleus (PVN), basal/basolateral and central nucleus of the amygdala, and lateral habenular nucleus in both sexes; in the medial preoptic area, the increase in c-Fos expression was found only in females, whereas in the medial nucleus of the amygdala, this increase was found only in males. In particular, the GAD67/c-Fos and oxytocin (OT)/c-Fos colocalization rates were elevated in the ACC and PVN, indicating selective activation of GABA and OT neurons in these regions. The "stressed" pairs matched their anxiety-like behaviors in the open-field test, and their plasma corticosterone levels correlated well with each other, suggesting an empathy-based mechanism. This partner-directed grooming was blocked by pretreatment with an OT receptor antagonist or a GABA(A) receptor antagonist in the ACC but not by a V1a subtype vasopressin receptor antagonist. We conclude that consolation behavior can be elicited by the social defeat paradigm in mandarin voles, and this behavior may be involved in a coordinated network of emotion-related brain structures, which differs slightly between the sexes. We also found that the endogenous OT and GABA systems within the ACC are essential for consolation behavior in mandarin voles. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF