9 results for "Chen, Jingying"
Search Results
2. Facial Expression Recognition Using Cascaded Random Forest Based on Local Features
- Author
- Tuo, Mingjian and Chen, Jingying; Satoh, Shin'ichi, editor
- Published
- 2018
- Full Text
- View/download PDF
3. Semi-supervised Learning of Deep Difference Features for Facial Expression Recognition
- Author
- Xu, Can, Xu, Ruyi, Chen, Jingying, and Liu, Leyuan; Lai, Jian-Huang, Liu, Cheng-Lin, Chen, Xilin, Zhou, Jie, Tan, Tieniu, Zheng, Nanning, and Zha, Hongbin, editors
- Published
- 2018
- Full Text
- View/download PDF
4. Toward Children's Empathy Ability Analysis: Joint Facial Expression Recognition and Intensity Estimation Using Label Distribution Learning.
- Author
- Chen, Jingying, Guo, Chen, Xu, Ruyi, Zhang, Kun, Yang, Zongkai, and Liu, Honghai
- Abstract
Empathy is one of the most important social communication skills in early childhood development. Facial expression analysis (FEA) is an effective way to analyze children's empathy ability because it reveals their emotional states. Previous works mainly focus on recognizing facial expression categories yet fail to estimate expression intensity, which is more important for fine-grained emotion analysis. To this end, this article first proposes to analyze children's empathy ability with both the categories and the intensities of facial expressions. A novel FEA method based on intensity label distribution learning is presented, which recognizes expression categories and estimates their intensity levels in an end-to-end framework. First, an intensity label distribution is generated for each frame in the expression sequence using linear interpolation and a Gaussian function, addressing the lack of reliable annotations for expression intensity. Then, an extended intensity label distribution is presented to automatically encode expression intensity in a multidimensional expression space; this integrates expression recognition and intensity estimation into a unified framework and boosts recognition performance by suppressing intensity-induced appearance variations and emphasizing the variations among weak expressions. Finally, a Siamese-like convolutional neural network is presented to learn the expression model from a pair of frames (an expressive frame and its corresponding neutral frame), using the extended intensity label distribution as supervision, thus effectively eliminating the influence of expression-unrelated information on FEA. Numerous experiments validate that the proposed method is promising for analyzing the differences in empathy ability between typically developing children and children with autism spectrum disorder.
- Published
- 2022
- Full Text
- View/download PDF
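The intensity-label-distribution step described in the abstract above (linear interpolation over the sequence, then a Gaussian spread over intensity levels) can be sketched as follows. This is a minimal illustration, not the authors' code; `num_levels` and `sigma` are assumed parameters.

```python
import numpy as np

def intensity_label_distribution(num_frames, num_levels=5, sigma=1.0):
    """Assign each frame of an onset-to-apex expression sequence a soft
    distribution over intensity levels: linear interpolation gives each
    frame a real-valued intensity, and a Gaussian centred on that value
    spreads probability mass to neighbouring levels."""
    # Linear interpolation: intensity rises from 0 (neutral) to the top level (apex).
    intensities = np.linspace(0, num_levels - 1, num_frames)
    levels = np.arange(num_levels)
    # Gaussian label distribution per frame, normalised to sum to 1.
    dist = np.exp(-((levels[None, :] - intensities[:, None]) ** 2) / (2 * sigma**2))
    return dist / dist.sum(axis=1, keepdims=True)
```

Each row is then a soft supervision target instead of a single hard intensity label, which is what lets nearby intensity levels share probability mass.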
5. Dual subspace manifold learning based on GCN for intensity-invariant facial expression recognition.
- Author
- Chen, Jingying, Shi, Jinxin, and Xu, Ruyi
- Subjects
- FACIAL expression, EMOTIONS, SUPERVISED learning, COMPUTER vision
- Abstract
Facial expression recognition (FER) is one of the most important computer vision tasks for understanding human inner emotions. However, the poor generalization ability of FER models limits their applicability due to tremendous intraclass variation. Especially for expressions of varying intensities, the appearance differences among weak expressions are subtle, which makes FER challenging. In response to these issues, this paper presents a dual subspace manifold learning method based on a graph convolutional network (GCN) for intensity-invariant FER. The method treats the target task as a node classification problem and learns the manifold representation using two subspace analysis methods: locality preserving projection (LPP) and peak-piloted locality preserving projection (PLPP). Inspired by the classic LPP, which maintains local similarity among data, the paper introduces a novel PLPP that maintains the locality between peak and non-peak expressions to enhance the representation of weak expressions. The paper also reports two subspace fusion methods, one based on a weighted adjacency matrix and the other on a self-attention mechanism, that combine the LPP and PLPP results to further improve FER performance. The second method achieves recognition accuracies of 93.83% on CK+, 74.86% on Oulu-CASIA and 75.37% on MMI for weak expressions, outperforming state-of-the-art methods.
• A semi-supervised learning framework based on a GCN for intensity-invariant FER tasks.
• A novel PLPP method that keeps the locality between peak and non-peak expressions.
• Two different subspace fusion methods that combine the LPP and PLPP results.
- Published
- 2024
- Full Text
- View/download PDF
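The classic LPP that this abstract builds on can be sketched with plain NumPy. This is a generic textbook LPP, not the paper's PLPP or GCN pipeline; `n_neighbors`, `dim`, and the heat-kernel width `t` are assumed parameters.

```python
import numpy as np

def lpp(X, n_neighbors=3, dim=2, t=1.0):
    """Minimal locality preserving projection: find a linear projection
    that keeps samples that are neighbours in the input space close
    together after projection. X has shape (n_samples, n_features)."""
    n = X.shape[0]
    # Heat-kernel affinities, restricted to each row's k nearest neighbours.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / t)
    np.fill_diagonal(W, 0.0)
    for i in range(n):
        drop = np.argsort(W[i])[: n - n_neighbors]
        W[i, drop] = 0.0
    W = np.maximum(W, W.T)  # symmetrise the neighbourhood graph
    D = np.diag(W.sum(1))
    L = D - W               # graph Laplacian
    # Generalised eigenproblem X^T L X a = lambda X^T D X a;
    # keep the directions with the smallest eigenvalues.
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])  # small ridge for stability
    vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
    order = np.argsort(vals.real)
    return vecs.real[:, order[:dim]]
```

The paper's PLPP would, per the abstract, replace the generic neighbourhood graph with edges tying each non-peak expression to its peak counterpart.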
6. Automatic social signal analysis: Facial expression recognition using difference convolution neural network.
- Author
- Chen, Jingying, Lv, Yongqiang, Xu, Ruyi, and Xu, Can
- Abstract
Facial expression is one of the most powerful social signals for human beings to convey emotion and intention; hence, automatic facial expression recognition (FER) has wide applications in human–computer interaction and affective computing and has attracted increasing attention recently. Research in this field has made great progress, especially with the development of deep learning methods. However, FER remains a challenging task due to individual differences. To address this issue, we propose a two-stage framework based on a Difference Convolution Neural Network (DCNN), inspired by the nonstationary nature of facial expressions. In the first stage, the neutral frame and the fully expressive frame are automatically picked out from the facial expression sequence using a binary Convolution Neural Network (CNN). In the second stage, an end-to-end DCNN classifies the six basic facial expressions using the difference information between the neutral frame and the fully expressive frame. Experiments conducted on the CK+ and BU-4DFE datasets show that the proposed framework delivers promising performance (95.4% on CK+ and 77.4% on BU-4DFE). Moreover, the method is successfully applied to analyze students' affective states in an E-learning environment, which suggests that it has strong potential for analyzing nonstationary social signals.
• The neutral and the fully expressive frames are picked out from the expression sequence by a binary CNN.
• An end-to-end DCNN learns the difference between the neutral and the fully expressive frame for automatic FER.
• Our method obtains competitive or even better performance on 2 benchmark databases.
- Published
- 2019
- Full Text
- View/download PDF
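The two-stage idea in this abstract (pick neutral and peak frames, then classify on their difference) can be sketched on precomputed frame features. The paper uses a binary CNN for stage one; the distance-based heuristic below is only a hypothetical stand-in, and the feature vectors are assumed inputs.

```python
import numpy as np

def pick_neutral_and_peak(frame_feats):
    """Stage-1 stand-in (the paper trains a binary CNN for this):
    take the frame closest to the sequence mean as 'neutral' and the
    frame farthest from that neutral frame as 'peak'."""
    feats = np.asarray(frame_feats, dtype=float)
    dist_to_mean = np.linalg.norm(feats - feats.mean(0), axis=1)
    neutral = int(np.argmin(dist_to_mean))
    peak = int(np.argmax(np.linalg.norm(feats - feats[neutral], axis=1)))
    return neutral, peak

def difference_feature(frame_feats):
    """Stage-2 input: the peak-minus-neutral difference, which suppresses
    identity-specific appearance and keeps expression-related change."""
    feats = np.asarray(frame_feats, dtype=float)
    n, p = pick_neutral_and_peak(feats)
    return feats[p] - feats[n]
```

In the paper this difference is computed inside the network on deep representations; the subtraction itself is the part the sketch illustrates.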
7. Facial expression recognition boosted by soft label with a diverse ensemble.
- Author
- Gan, Yanling, Chen, Jingying, and Xu, Luhui
- Subjects
- FACIAL expression, HUMAN facial recognition software, LABELS, HUMAN-computer interaction, MODEL railroads
- Abstract
• Constructed soft labels describe the natural correlation among expressions.
• A label-level perturbation strategy increases the diversity of the base classifiers.
• Our method obtains competitive or even better performance on 3 benchmark databases.
Facial expression recognition (FER) has recently attracted increasing attention with its growing applications in human-computer interaction and other fields. However, a well-performing convolutional neural network (CNN) model learned under hard-label (single-emotion) supervision may not achieve optimal performance in real-life applications, because captured facial images usually exhibit a mixture of multiple emotions rather than a single one. To address this problem, this paper presents a novel FER framework using a CNN and soft labels that associate multiple emotions with each expression. In this framework, the soft labels are obtained using a proposed constructor, which involves two steps: (1) training a CNN model on a training set under hard-label supervision; (2) fusing the latent label probability distributions predicted by the trained model to obtain soft labels. To improve the generalization performance of the ensemble classifier, we propose a novel label-level perturbation strategy to train multiple diverse base classifiers. Experiments were carried out on 3 publicly available databases: FER-2013, SFEW and RAF. The results indicate that our method achieves competitive or even better performance (FER-2013: 73.73%, SFEW: 55.73%, RAF: 86.31%) compared to state-of-the-art methods.
- Published
- 2019
- Full Text
- View/download PDF
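The two-step soft-label constructor described above can be sketched as follows. The fusion rule and the `alpha` weight are assumptions for illustration; the abstract does not specify how the hard label and the predicted distribution are combined.

```python
import numpy as np

def build_soft_labels(pred_probs, hard_labels, alpha=0.7):
    """Hypothetical soft-label constructor in the spirit of the abstract:
    fuse each sample's one-hot (hard) label with the label probability
    distribution predicted by a model trained under hard-label supervision.
    pred_probs: (n_samples, n_classes); hard_labels: length-n class indices."""
    pred_probs = np.asarray(pred_probs, dtype=float)
    n, k = pred_probs.shape
    one_hot = np.eye(k)[np.asarray(hard_labels)]
    # Convex combination keeps the annotated emotion dominant while
    # letting correlated emotions receive some probability mass.
    soft = alpha * one_hot + (1 - alpha) * pred_probs
    return soft / soft.sum(axis=1, keepdims=True)
```

Training the ensemble's base classifiers on perturbed versions of these soft labels is what the paper's label-level perturbation strategy would then add on top.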
8. Deep peak-neutral difference feature for facial expression recognition.
- Author
- Chen, Jingying, Xu, Ruyi, and Liu, Leyuan
- Subjects
- FACIAL expression, ARTIFICIAL neural networks, HUMAN facial recognition software, DEEP learning, COMPUTER vision
- Abstract
Facial expression recognition (FER) is important in vision-related applications. Deep neural networks have demonstrated impressive performance for face recognition; however, this approach relies heavily on a great deal of manually labeled training data, which is not available for facial expressions in real-world applications. Hence, we propose a powerful facial feature called the deep peak-neutral difference (DPND) for FER. The DPND is defined as the difference between two deep representations of the fully expressive (peak) and neutral facial expression frames. The difference tends to emphasize the facial parts that change in the transition from the neutral to the expressive face and to eliminate the face identity information retained in the deep neural network, which was trained on a large-scale face recognition dataset and fine-tuned for facial expression. Furthermore, unsupervised clustering and semi-supervised classification methods are presented to automatically acquire the neutral and peak frames from the expression sequence. The proposed feature achieved encouraging results on public databases, which suggests that it has strong potential for recognizing facial expressions in real-world applications.
- Published
- 2018
- Full Text
- View/download PDF
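The unsupervised-clustering step the abstract mentions for separating neutral and peak frames can be sketched as a two-centre clustering over frame features. This is a generic 2-means sketch, not the paper's method; initializing the centres at the first and last frame is an assumption.

```python
import numpy as np

def split_neutral_peak(feats, n_iter=20):
    """Cluster an expression sequence's frame features into two groups,
    a neutral-like cluster (0) and a peak-like cluster (1), with a plain
    2-means loop. Centres start at the first frame (assumed onset/neutral)
    and the last frame (assumed near-apex)."""
    feats = np.asarray(feats, dtype=float)
    centres = feats[[0, -1]].copy()
    assign = np.zeros(len(feats), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(feats[:, None] - centres[None], axis=2)
        assign = d.argmin(1)
        for c in (0, 1):
            if (assign == c).any():
                centres[c] = feats[assign == c].mean(0)
    return assign
```

The DPND feature would then subtract a neutral-cluster representation from a peak-cluster representation, as described in the abstract.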
9. Self-Difference Convolutional Neural Network for Facial Expression Recognition.
- Author
- Liu, Leyuan, Jiang, Rubin, Huo, Jiao, Chen, Jingying, and Woźniak, Marcin
- Subjects
- FACIAL expression, CONVOLUTIONAL neural networks, GENERATIVE adversarial networks, SELF-efficacy
- Abstract
Facial expression recognition (FER) is a challenging problem due to the intra-class variation caused by subject identities. In this paper, a self-difference convolutional network (SD-CNN) is proposed to address the intra-class variation issue in FER. First, the SD-CNN uses a conditional generative adversarial network to generate the six typical facial expressions for the same subject as the testing image. Second, six compact and lightweight difference-based CNNs, called DiffNets, are designed for classifying facial expressions. Each DiffNet extracts a pair of deep features from the testing image and one of the six synthesized expression images, and compares the difference between the deep feature pair. In this way, any potential facial expression in the testing image has an opportunity to be compared with the synthesized "self": an image of the same subject with the same facial expression as the testing image. As most of the self-difference features of images with the same facial expression gather tightly in the feature space, the intra-class variation issue is significantly alleviated. The proposed SD-CNN is extensively evaluated on two widely used facial expression datasets: CK+ and Oulu-CASIA. Experimental results demonstrate that the SD-CNN achieves state-of-the-art performance, with accuracies of 99.7% on CK+ and 91.3% on Oulu-CASIA, respectively. Moreover, the model size of the online processing part of the SD-CNN is only 9.54 MB (1.59 MB × 6), which enables the SD-CNN to run on low-cost hardware.
- Published
- 2021
- Full Text
- View/download PDF
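The SD-CNN's decision rule, comparing the test image against six synthesized "self" images, can be sketched under a simplifying assumption: instead of six learned DiffNets, compare plain feature vectors and pick the class whose self-difference is smallest. The feature vectors here are assumed inputs, not the paper's deep representations.

```python
import numpy as np

def classify_by_self_difference(test_feat, synth_feats):
    """Given a feature vector for the test image and one feature vector
    per synthesized expression image of the same subject, return the
    index of the expression whose self-difference is smallest: if the
    test image truly shows expression k, its difference from the
    synthesized expression-k image should be the tightest."""
    synth_feats = np.asarray(synth_feats, dtype=float)
    diffs = np.linalg.norm(synth_feats - np.asarray(test_feat, dtype=float), axis=1)
    return int(np.argmin(diffs))
```

In the actual SD-CNN, each of the six comparisons is scored by its own DiffNet rather than a Euclidean norm; the argmin-over-self-differences structure is the part this sketch shows.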
Discovery Service for Jio Institute Digital Library