46 results
Search Results
2. An English video teaching classroom attention evaluation model incorporating multimodal information.
- Author
-
Miao, Qin, Li, Lemin, and Wu, Dongming
- Abstract
To address the low detection efficiency and long processing times of abnormal-behavior detection and identification methods in traditional video surveillance systems, a multimodal abnormal behavior detection and identification method based on video surveillance is proposed and applied to evaluating college students' concentration in online English video classes. The model captures abnormal behaviors and facial expressions and builds a joint network that fuses the two. Tests on two open-source datasets and a self-built real-time classroom dataset verify that the model achieves better recognition performance than current mainstream models while maintaining real-time performance. The proposed model offers a new approach to building smart classrooms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
3. Design and Development of a Tracking System for Missing Persons Using ML Algorithms.
- Author
-
Mohamed Fahad, S. and Nirmala Sugirtha Rajini, S.
- Abstract
In recent years, a rise in missing person cases has posed challenges for law enforcement. This paper explores the various issues surrounding many unresolved cases and aims to uncover the contributing factors. Using a detailed analysis of police records, the study identifies patterns and challenges in resolving missing person cases. By understanding these dynamics, law enforcement agencies can refine strategies to enhance the likelihood of resolution, emphasizing the critical need for effective measures in addressing the growing issue of missing persons. Additionally, the paper proposes an innovative approach to locating missing persons using machine learning (ML) algorithms, specifically support vector machine (SVM) and K-nearest neighbors (KNN). Utilizing facial expressions as the basis for model training, the system swiftly and accurately identifies known individuals. The system, fed with a missing person dataset from Kaggle, outputs the person's identity based on features like gender, age and location. The results are then communicated to the police for further investigation. This streamlined approach enhances the efficiency of the search and identification process, contributing to more effective resolutions of missing person cases. The proposed system serves as a valuable tool for law enforcement in expediting investigations and addressing the critical issue of missing persons in a timely and efficient manner. [ABSTRACT FROM AUTHOR]
- Published
- 2024
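The SVM/KNN identification step this abstract describes can be illustrated with a minimal pure-Python K-nearest-neighbors sketch. The feature encoding, records and names below are hypothetical, not taken from the paper or its Kaggle dataset:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` as the majority label among its k nearest
    training records (Euclidean distance over numeric features)."""
    nearest = sorted(train, key=lambda rec: math.dist(rec[0], query))
    labels = [label for _, label in nearest[:k]]
    return Counter(labels).most_common(1)[0][0]

# Toy records: (features = [age, encoded_location], identity label).
train = [
    ([12.0, 1.0], "person_A"),
    ([13.0, 1.0], "person_A"),
    ([45.0, 3.0], "person_B"),
    ([44.0, 3.0], "person_B"),
    ([30.0, 2.0], "person_C"),
]

print(knn_predict(train, [12.5, 1.0]))  # -> person_A
```

In a real pipeline the feature vectors would be learned facial embeddings rather than raw attributes, and a library classifier would replace this loop; the sketch only shows the nearest-neighbor voting idea.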
4. A Review of Automatic Pain Assessment from Facial Information Using Machine Learning.
- Author
-
Ben Aoun, Najib
- Subjects
PAIN measurement, MACHINE learning, FACIAL pain, FACIAL expression - Abstract
Pain assessment has become an important component of modern healthcare systems. It aids medical professionals in diagnosing patients and providing appropriate care and therapy. Conventionally, patients are asked to report their pain level verbally. However, this subjective method is generally inaccurate, impossible for non-communicative people, affected by physiological and environmental factors, and time-consuming, which renders it inefficient in healthcare settings. Thus, there is a growing need for objective, reliable and automatic pain assessment alternatives. Owing to the efficiency of facial expressions as pain biomarkers that accurately convey pain intensity, and the power of machine learning methods to learn the subtle nuances of pain expressions and accurately predict pain intensity, automatic pain assessment methods have evolved rapidly. This paper reviews recent spatial facial expression and machine learning-based pain assessment methods. Moreover, we highlight the pain intensity scales, datasets and method performance evaluation criteria. In addition, these methods' contributions, strengths and limitations are reported and discussed. The review also lays the groundwork for further study toward more accurate automatic pain assessment. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Machine learning and deep learning techniques for driver fatigue and drowsiness detection: a review.
- Author
-
El-Nabi, Samy Abd, El-Shafai, Walid, El-Rabaie, El-Sayed M., Ramadan, Khalil F., Abd El-Samie, Fathi E., and Mohsen, Saeed
- Abstract
Vehicle accidents have several contributing factors, including driver negligence, drowsiness, and fatigue. Many of these accidents could be avoided if drivers were warned in time. Recent developments in computer vision and artificial intelligence (AI) make it possible to monitor drivers and alert them when they are not concentrating on driving. AI techniques can extract relevant features from the driver's facial expressions, such as eye closure, yawning, and head movements, to infer the level of sleepiness. They can also acquire biological signals from the driver's body and indications from vehicle behavior. This paper provides a comprehensive review of machine learning (ML) and deep learning (DL) techniques for detecting driver drowsiness and fatigue. Current techniques are classified into four categories: image- or video-based analysis during driving, biological signal analysis, vehicle movement analysis, and hybrid techniques. Supervised techniques for detecting fatigue and drowsiness on different datasets are reviewed, with a comparison of their pros and cons. Results are presented in terms of detection accuracy for each technique and discussed in light of recent problems and challenges in this field. The paper also highlights the applicability and reliability of the different techniques and offers suggestions for future work in driver drowsiness detection (DDD). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Happy facial expressions and mouse pointing enhance EFL vocabulary learning from instructional videos.
- Author
-
Pi, Zhongling, Huang, Xuemei, Wen, Yun, Wang, Qin, Zhao, Xin, and Li, Xiying
- Subjects
NONVERBAL cues, POINTING (Gesture), FACIAL expression, INSTRUCTIONAL films, LEARNING - Abstract
Given their easy accessibility and dual-channel model of content presentation, instructional videos have become a favoured tool for EFL vocabulary learning among many students. Teachers often use various nonverbal behaviours to elicit social reactions and guide learners' attention in instructional videos. The current study conducted three eye-tracking experiments to examine the circumstances under which a teacher's happy facial expressions are beneficial in instructional videos, with or without pointing gestures and mouse pointing. Experiments 1 and 2 demonstrated that the combination of happy facial expressions and pointing gestures attracted learners' attention to the teacher and hindered students' learning performance, regardless of the complexity of the slides. Experiment 3 showed that in instructional videos with complex slides, happy facial expressions combined with mouse pointing can enhance students' learning performance. Teachers are advised to show happy facial expressions and avoid pointing gestures when designing instructional videos.
Practitioner notes
What is already known about this topic: Given their easy accessibility and dual-channel model of content presentation, instructional videos have become a favoured tool for EFL vocabulary learning. When teachers record instructional videos while standing alongside slides, they often use nonverbal cues to support their speech. Teachers' social and attentional cues interactively influence students' learning processes and performance.
What this paper adds: A teacher's happy facial expressions evoke more positive emotions and greater motivation in learners compared to bored expressions. A teacher's pointing gestures, when combined with happy facial expressions, divert students' attention away from the slides and towards the teacher. A teacher's happy facial expressions enhance students' learning performance when no pointing gestures are used in videos with simple slides.
Implications for practice/policy: Teachers are advised to display happy facial expressions and avoid pointing gestures in instructional videos, regardless of the complexity of the slides. Practitioners should consider how to incorporate teachers' facial expressions, pointing gestures and mouse pointing effectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Real-Time Analysis of Facial Expressions for Mood Estimation.
- Author
-
Filippini, Juan Sebastián, Varona, Javier, and Manresa-Yee, Cristina
- Subjects
FACIAL expression, COMPUTER vision, PLEASURE, ANXIETY, AFFECT (Psychology) - Abstract
This paper proposes a model-based method for real-time automatic mood estimation in video sequences. The approach is customized by learning the person's specific facial parameters, which are transformed into facial Action Units (AUs). A model mapping for mood representation is used to describe moods in terms of the PAD space: Pleasure, Arousal, and Dominance. From the intersection of these dimensions, eight octants represent fundamental mood categories. In the experimental evaluation, a stimulus video, randomly selected from a set prepared to elicit different moods, was played to participants while their facial expressions were recorded. The experiment showed that Dominance is the dimension least affected by facial expression, so this dimension could be eliminated from mood categorization. Four categories corresponding to the quadrants of the Pleasure–Arousal (PA) plane, "Exalted", "Calm", "Anxious" and "Bored", were then defined, with two more categories for the "Positive" and "Negative" signs of the Pleasure (P) dimension. Results showed 73% agreement in the PA categorization and 94% in the P dimension, demonstrating that facial expressions can be used to estimate moods within these defined categories and provide cues for assessing users' subjective states in real-world applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
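The quadrant categorization this abstract describes can be sketched as a simple sign test on the Pleasure–Arousal plane. The specific quadrant-to-mood assignment below (e.g. high Pleasure and high Arousal = "Exalted") is an assumption for illustration; the paper defines the exact mapping:

```python
def pa_mood(pleasure: float, arousal: float) -> str:
    """Map a point in the Pleasure-Arousal plane to one of the four
    quadrant moods named in the abstract (assumed assignment)."""
    if pleasure >= 0:
        return "Exalted" if arousal >= 0 else "Calm"
    return "Anxious" if arousal >= 0 else "Bored"

def p_sign(pleasure: float) -> str:
    """The two extra categories from the sign of Pleasure alone."""
    return "Positive" if pleasure >= 0 else "Negative"

print(pa_mood(0.7, 0.8), p_sign(0.7))  # -> Exalted Positive
```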
8. Application of Stereo Digital Image Correlation on Facial Expressions Sensing.
- Author
-
Cheng, Xuanshi, Wang, Shibin, Wei, Huixin, Sun, Xin, Xin, Lipan, Li, Linan, Li, Chuanwei, and Wang, Zhiyong
- Subjects
DIGITAL image correlation, FACIAL expression, STEREO image, EMOTIONS, DIGITAL images - Abstract
Facial expression is an important way to reflect human emotions and represents a dynamic deformation process. Analyzing facial movements is an effective means of understanding expressions. However, there is currently a lack of methods capable of analyzing the dynamic details of full-field deformation in expressions. In this paper, to enable effective dynamic analysis of expressions, a classic optical measuring method called stereo digital image correlation (stereo-DIC or 3D-DIC) is employed to analyze the deformation fields of facial expressions. The forming processes of the six basic facial expressions of experimental subjects are analyzed through the displacement and strain fields calculated by 3D-DIC. The displacement fields of each expression exhibit strong consistency with the action units (AUs) defined by the classical Facial Action Coding System (FACS). Moreover, the gradient of the displacement, i.e., the strain fields, offers particular advantages in characterizing facial expressions due to their localized nature, effectively sensing the nuanced dynamics of facial movements. By processing extensive data, this study identifies two featured regions in the six basic expressions: one where deformation begins and one where deformation is most severe. Based on these two regions, the temporal evolutions of the six basic expressions are discussed. The presented investigations demonstrate the superior performance of 3D-DIC in the quantitative analysis of facial expressions, and the proposed analytical strategy may have value in objectively characterizing human expressions based on quantitative measurement. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Enhancing Criminal Detection: A Multi-Step Approach for Live Location Tracking and Emotion Verification Using Facial Recognition Technology.
- Author
-
Mohammed, Yahya Abdulsattar
- Subjects
HUMAN facial recognition software, CRIMINAL justice system, CRIMINAL investigation, LAW enforcement, ARTIFICIAL intelligence, JUSTICE - Abstract
This paper offers a thorough analysis of the current state of deception detection in criminal justice and law enforcement settings. The study, which synthesizes findings from multiple investigations, highlights both the progress made and the ongoing difficulties in accurately distinguishing deception from truth. The main subjects covered include the limitations of conventional techniques such as behavior analysis interviews and polygraph exams, the potential of alternative strategies such as voice tone analysis and facial expression analysis, and the moral ramifications of using emotional AI systems for deception detection. Through critical analysis and discussion, the review highlights the need for ongoing interdisciplinary research and ethical scrutiny in order to advance the field of deception detection while upholding justice and respect for human rights. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. A method for recognizing facial expression intensity based on facial muscle variations
- Author
-
Zhang, Yukun, Fei, Zixiang, Li, Xia, Zhou, Wenju, and Fei, Minrui
- Published
- 2024
- Full Text
- View/download PDF
11. The face is central to primate multicomponent signals
- Author
-
Waller, Bridget M., Kavanagh, Eithne, Micheletta, Jerome, Clark, Peter R., and Whitehouse, Jamie
- Published
- 2024
- Full Text
- View/download PDF
12. Facial Expression Recognition Based on GSO Enhanced Deep Learning in IOT Environment.
- Author
-
AL-Abboodi, Rana H. and AL-Ani, Ayad A.
- Subjects
FACIAL expression, CONVOLUTIONAL neural networks, DEEP learning, FEATURE extraction, POWER resources, HUMAN-computer interaction - Abstract
Facial expressions play an important role in human communication, and integrating deep learning techniques into Internet of Things (IoT) scenarios enhances the understanding of this data, enabling applications in industries such as healthcare, security, and human-computer interaction. Existing methods suffer from lower accuracy and higher computational complexity than the proposed Deep Convolutional Neural Network using Galactic Swarm Optimization (DCNN-GSO) approach, which hinders their practical applicability in real-time image processing tasks. This paper proposes a comprehensive framework for facial expression analysis in IoT environments. The preprocessing phase uses a Gaussian filter to improve image quality and reduce noise. Feature extraction is performed using spatio-temporal interest points (STIP), which capture the spatial and temporal cues of facial expressions. The proposed method leverages Deep Convolutional Neural Networks (DCNN) to extract discriminative features from facial images captured by IoT devices. Galactic Swarm Optimization (GSO) is employed to optimize the hyperparameters of the DCNN model, improving its performance in facial expression classification tasks. By integrating GSO with deep learning, the proposed approach aims to overcome the limited computational resources and energy constraints inherent in IoT environments. The framework addresses challenges such as noise, computational complexity, and accuracy in IoT systems, improving device performance and connectivity. The DCNN-GSO method outperforms the competition with 94% accuracy, 92.3% precision, and 91% recall. With a low mean absolute error (MAE) of 3.46, it proves a reliable and accurate solution for practical use. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Automatic facial expression recognition under partial occlusion based on motion reconstruction using a denoising autoencoder.
- Author
-
Kemmou, Abdelaali, El Makrani, Adil, El Azami, Ikram, and Aabidi, Moulay Hafid
- Subjects
FACIAL expression, RESEARCH personnel, ROAD safety measures, MOTION - Abstract
Automatic facial expression recognition (FER) plays a valuable role in various fields, including health, road safety, and marketing, where providing feedback on the user's condition is crucial. While significant progress has been made in controlled environments (frontal, unoccluded, and well-lit conditions), recognizing facial expressions in unconstrained environments (natural settings) remains challenging. The presence of occlusions poses a particular difficulty, as they obscure parts of the facial information captured in the image. To address this issue, researchers have proposed different solutions, broadly categorized into two approaches: those focusing on visible regions of the face and those attempting to reconstruct hidden parts. Currently, most solutions rely on texture- or geometry-based methods, with only a few utilizing motion-based approaches. However, incorporating motion appears particularly promising for adapting to occlusions due to its unique characteristics, such as close-range propagation and local coherence. In this paper, our focus lies on leveraging motion to overcome the challenges posed by occlusions in FER tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
14. Multimodal Emotion Recognition Based on Facial Expressions, Speech, and EEG
- Author
-
Jiahui Pan, Weijie Fang, Zhihang Zhang, Bingzhi Chen, Zheng Zhang, and Shuihua Wang
- Subjects
Multimodal emotion recognition, electroencephalogram, facial expressions, speech, Computer applications to medicine. Medical informatics, R858-859.7, Medical technology, R855-855.5 - Abstract
Goal: As an essential human-machine interaction task, emotion recognition has become an active research area over recent decades. Although previous attempts to classify emotions have achieved high performance, several challenges remain open: 1) How to effectively recognize emotions using different modalities remains challenging. 2) Given the increasing amount of computing power required for deep learning, it is important to provide real-time detection and improve the robustness of deep neural networks. Method: In this paper, we propose a deep learning-based multimodal emotion recognition (MER) framework called Deep-Emotion, which can adaptively integrate the most discriminating features from facial expressions, speech, and electroencephalogram (EEG) signals to improve the performance of MER. Specifically, the proposed Deep-Emotion framework consists of three branches: the facial branch, the speech branch, and the EEG branch. The facial branch uses the improved GhostNet neural network proposed in this paper for feature extraction, which effectively alleviates overfitting during training and improves classification accuracy compared with the original GhostNet network. For the speech branch, this paper proposes a lightweight fully convolutional neural network (LFCNN) for the efficient extraction of speech emotion features. For the EEG branch, we propose a tree-like LSTM (tLSTM) model capable of fusing multi-stage features for EEG emotion feature extraction. Finally, we adopt decision-level fusion to integrate the recognition results of the three modalities, resulting in more comprehensive and accurate performance. Result and Conclusions: Extensive experiments on the CK+, EMO-DB, and MAHNOB-HCI datasets demonstrate the advanced nature of the proposed Deep-Emotion method, as well as the feasibility and superiority of the MER approach.
- Published
- 2024
- Full Text
- View/download PDF
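The decision-level fusion step this abstract describes can be sketched as a (optionally weighted) vote over the per-branch predictions. The modality names are from the abstract; the weighting scheme and labels below are illustrative assumptions, not the paper's actual fusion rule:

```python
from collections import Counter

def decision_fusion(predictions, weights=None):
    """Decision-level fusion: each modality branch (face, speech, EEG)
    votes for an emotion label; the label with the largest (optionally
    weighted) vote total wins."""
    weights = weights or {m: 1.0 for m in predictions}
    votes = Counter()
    for modality, label in predictions.items():
        votes[label] += weights.get(modality, 1.0)
    return votes.most_common(1)[0][0]

branch_outputs = {"face": "happy", "speech": "happy", "eeg": "neutral"}
print(decision_fusion(branch_outputs))  # -> happy
```

With equal weights this is majority voting; giving one branch a larger weight (e.g. trusting EEG more) can flip the fused decision even when the other two branches agree.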
15. Associations between facial expressions and observational pain in residents with dementia and chronic pain.
- Author
-
Pu, Lihui, Coppieters, Michel W., Smalbrugge, Martin, Jones, Cindy, Byrnes, Joshua, Todorovic, Michael, and Moyle, Wendy
- Subjects
CHRONIC pain & psychology, NURSING home patients, PAIN measurement, MOBILE apps, SECONDARY analysis, RESEARCH funding, DESCRIPTIVE statistics, BODY language, ROBOTICS, DEMENTIA, DATA analysis software, PSYCHOSOCIAL factors, FACIAL expression, REGRESSION analysis, DEMENTIA patients - Abstract
Aim: To identify specific facial expressions associated with pain behaviors using the PainChek application in residents with dementia. Design: This is a secondary analysis from a study exploring the feasibility of PainChek to evaluate the effectiveness of a social robot (PARO) intervention on pain for residents with dementia from June to November 2021. Methods: Participants experienced PARO individually five days per week for 15 min (once or twice) per day for three consecutive weeks. The PainChek app assessed each resident's pain levels before and after each session. The association between nine facial expressions and the adjusted PainChek scores was analyzed using a linear mixed model. Results: A total of 1820 assessments were completed with 46 residents. Six facial expressions were significantly associated with a higher adjusted PainChek score. Horizontal mouth stretch showed the strongest association with the score, followed by brow lowering, parting lips, wrinkling of the nose, raising of the upper lip and closing eyes. However, the presence of cheek raising, tightening of eyelids and pulling at the corner lip were not significantly associated with the score. Limitations of using the PainChek app were identified. Conclusion: Six specific facial expressions were associated with observational pain scores in residents with dementia. Results indicate that automated real-time facial analysis is a promising approach to assessing pain in people with dementia. However, it requires further validation by human observers before it can be used for decision-making in clinical practice. Impact: Pain is common in people with dementia, while assessing pain is challenging in this group. This study generated new evidence of facial expressions of pain in residents with dementia. Results will inform the development of valid artificial intelligence-based algorithms that will support healthcare professionals in identifying pain in people with dementia in clinical situations. 
Reporting Method: The study adheres to the CONSORT reporting guidelines. Patient or Public Contribution: One resident with dementia and two family members of people with dementia were consulted and involved in the study design, where they provided advice on the protocol, information sheets and consent forms, and offered valuable insights to ensure research quality and relevance. Trial Registration: Australian and New Zealand Clinical Trials Registry number (ACTRN12621000837820). [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. The power of facial expressions in branding: can emojis versus human faces shape emotional contagion and brand fun?
- Author
-
Almeida, Pedro, Rita, Paulo, Pinto, Diego Costa, and Herter, Márcia
- Subjects
EMOTIONAL contagion, FACIAL expression, BRANDING (Marketing), EMOTICONS & emojis, EYE tracking - Abstract
Despite the growing importance of facial expressions in online brand communications, little is known about the positive and negative effects of replacing human facial expressions with emojis. To address this gap, this research examines how facial expressions (emojis versus human faces) shape consumers' emotional contagion and brand fun. Findings from three experimental studies (two online and one with eye-tracking) demonstrate that the presence of emojis increases brand fun due to the underlying mechanism of emotional contagion. However, although emojis might foster positive brand outcomes, they reduce credibility compared to brand communications using human faces. Finally, this research provides relevant managerial implications for brands that wish to create communications using facial expressions since emojis can positively impact product engagement. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Facial and Body Posture Emotion Identification in Deaf and Hard-of-Hearing Young Adults
- Author
-
Blose, Brittany A. and Schenkel, Lindsay S.
- Published
- 2024
- Full Text
- View/download PDF
18. Human emotion recognition by analyzing facial expressions, heart rate and blogs using deep learning method
- Author
-
Ghosh, Rajib and Sinha, Ditipriya
- Published
- 2024
- Full Text
- View/download PDF
19. Delineating emotional differences between depressed and non-depressed individuals using a novel multimodal framework
- Author
-
Gill, Rupali, Singh, Jaiteg, Hooda, Susheela, and Srivastava, Durgesh
- Published
- 2024
- Full Text
- View/download PDF
20. Enhancing learner affective engagement: The impact of instructor emotional expressions and vocal charisma in asynchronous video-based online learning
- Author
-
Suen, Hung-Yue and Hung, Kuo-En
- Published
- 2024
- Full Text
- View/download PDF
21. The Effects of Cognitive Bias Modification on Hostile Interpretation Bias and Aggressive Behavior: A Systematic Review and Meta-analysis
- Author
-
AlMoghrabi, Nouran, Verhoef, Rogier E. J., Smeijers, Danique, Huijding, Jorg, and van Dijk, Anouk
- Published
- 2024
- Full Text
- View/download PDF
22. Emotion recognition to support personalized therapy in the elderly: an exploratory study based on CNNs
- Author
-
Torcate, Arianne Sarmento, de Santana, Maíra Araújo, and dos Santos, Wellington Pinheiro
- Published
- 2024
- Full Text
- View/download PDF
23. Facial Emotion Recognition of Mentally Retarded Children to Aid Psychotherapist
- Author
-
Srinivasan, R., Swathika, R., Radha, N., Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Senjyu, Tomonobu, editor, So–In, Chakchai, editor, and Joshi, Amit, editor
- Published
- 2024
- Full Text
- View/download PDF
24. Emotion Detection Using Machine Learning Technique
- Author
-
Shukla, Samiksha, Lucas, Yash, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Shukla, Samiksha, editor, Sayama, Hiroki, editor, Kureethara, Joseph Varghese, editor, and Mishra, Durgesh Kumar, editor
- Published
- 2024
- Full Text
- View/download PDF
25. Emotion Recognition Through Facial Expressions from Images Using Deep Learning Techniques
- Author
-
Ansy, S. N., Bilal, E. Ahmed, Neethu, M. S., Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Nanda, Satyasai Jagannath, editor, Yadav, Rajendra Prasad, editor, Gandomi, Amir H., editor, and Saraswat, Mukesh, editor
- Published
- 2024
- Full Text
- View/download PDF
26. An Investigation of Video Vision Transformers for Depression Severity Estimation from Facial Video Data
- Author
-
Bargshady, Ghazal, Goecke, Roland, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Yan, Wei Qi, editor, Nguyen, Minh, editor, Nand, Parma, editor, and Li, Xuejun, editor
- Published
- 2024
- Full Text
- View/download PDF
27. Attributions of Trust and Trustworthiness.
- Author
-
Wilson, Rick K. and Eckel, Catherine C.
- Subjects
TRUST, GENDER stereotypes, STEREOTYPES, HUMAN skin color, FACIAL expression - Abstract
This study examines whether individuals can accurately predict trust and trustworthiness in others based on their appearance. Using photos and decisions from previous experimental trust games, subjects were asked to view the photos and guess the levels of trust and trustworthiness of the individuals depicted. The results show that subjects had little ability to accurately guess the trust and trustworthiness behavior of others. There is significant heterogeneity in the accuracy of guesses, and errors in guesses are systematically related to the observable characteristics of the photos. Subjects' guesses appear to be influenced by stereotypes based on the features seen in the photos, such as gender, skin color, or attractiveness. These findings suggest that individuals' beliefs that they can infer trust and trustworthiness from appearance are unfounded, and that efforts to reduce the impact of stereotypes on inferred trustworthiness may improve the efficiency of trust-based interactions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Portable Facial Expression System Based on EMG Sensors and Machine Learning Models.
- Author
-
Sanipatín-Díaz, Paola A., Rosero-Montalvo, Paul D., and Hernandez, Wilmar
- Subjects
MACHINE learning, FACIAL expression, DEEP learning, EMOTIONS, COMPUTER vision, DETECTORS - Abstract
One of the biggest challenges for computers is collecting data from human behavior, such as interpreting human emotions. Traditionally, this process is carried out by computer vision or multichannel electroencephalograms. However, these require heavy computational resources, far from end users or from where the dataset was made. By contrast, sensors can capture muscle reactions and respond on the spot, preserving information locally without requiring powerful computers. The research subject is therefore the recognition of the six primary human emotions using electromyography sensors in a portable device. The sensors are placed on specific facial muscles to detect happiness, anger, surprise, fear, sadness, and disgust. The experimental results showed that the Cortex-M0 microcontroller provides enough computational capability to store a deep learning model with a classification score of 92%. Furthermore, we demonstrate the necessity of collecting data from natural environments and how such data need to be processed by a machine learning pipeline. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. FER-BHARAT: a lightweight deep learning network for efficient unimodal facial emotion recognition in Indian context.
- Author
-
Karani, Ruhina, Jani, Jay, and Desai, Sharmishta
- Subjects
EMOTION recognition ,DEEP learning ,CONVOLUTIONAL neural networks ,AFFECTIVE computing ,ARTIFICIAL intelligence ,FEATURE extraction ,DATABASES - Abstract
Humans' ability to manage their emotions has a big impact on their ability to plan and make decisions. In order to better understand people and improve human–machine interaction, researchers in affective computing and artificial intelligence are investigating the detection and recognition of emotions. However, different cultures have distinct ways of expressing emotions, and the existing emotion recognition datasets and models may not effectively capture the nuances of the Indian population. To address this gap, this study proposes custom-built lightweight Convolutional Neural Network (CNN) models that are optimized for accuracy and computational efficiency. These models are trained and evaluated on two Indian emotion datasets: the Indian Spontaneous Expression Dataset (ISED) and the Indian Semi Acted Facial Expression Database (iSAFE). The proposed CNN model with manual feature extraction provides a remarkable accuracy improvement of 11.14% for the ISED and 4.72% for the iSAFE dataset as compared to the baseline, while reducing the training time. The proposed model also surpasses the accuracy of the pre-trained ResNet-50 model by 0.27% for the ISED and by 0.24% for the iSAFE dataset, with a significant improvement in training time of approximately 320 s for ISED and 60 s for iSAFE. The suggested lightweight CNN model with manual feature extraction offers the advantage of being computationally efficient and more accurate compared to the pre-trained model, making it a more practical and efficient solution for emotion recognition among Indians. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
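The abstract above credits "manual feature extraction" for much of the gain but does not specify the features used. As a hedged illustration of what a hand-crafted facial descriptor can look like, the sketch below computes a simplified HOG-style gradient-orientation histogram in NumPy; the function name, bin count, and synthetic input are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def orientation_histogram(img, bins=8):
    """Toy "manual" feature: a magnitude-weighted histogram of gradient
    orientations (a simplified HOG-style descriptor) for a grayscale
    face crop given as a 2-D array. Illustrative only."""
    gy, gx = np.gradient(img.astype(float))   # per-axis derivatives (rows, cols)
    mag = np.hypot(gx, gy)                    # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # orientation folded to [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist  # L1-normalised descriptor

# Horizontal stripes have purely vertical gradients, so all the energy
# lands in the single orientation bin containing pi/2.
stripes = np.tile(np.arange(16.0), (16, 1)).T
feat = orientation_histogram(stripes)
```

Descriptors like this would then feed a small classifier, which is the general pipeline the abstract describes.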
30. Optimization of Wheelchair Control via Multi-Modal Integration: Combining Webcam and EEG.
- Author
-
Zaway, Lassaad, Ben Amor, Nader, Ktari, Jalel, Jallouli, Mohamed, Chrifi Alaoui, Larbi, and Delahoche, Laurent
- Subjects
ELECTRIC wheelchairs ,ELECTROENCEPHALOGRAPHY ,WHEELCHAIRS ,COMMAND & control systems ,CONVOLUTIONAL neural networks ,SIGNAL processing - Abstract
Even though Electric Powered Wheelchairs (EPWs) are a useful tool for meeting the needs of people with disabilities, some disabled people find it difficult to use regular EPWs that are joystick-controlled. Smart wheelchairs that use Brain–Computer Interface (BCI) technology present an efficient solution to this problem. This article presents a cutting-edge intelligent control wheelchair that is intended to improve user involvement and security. The suggested method combines facial expression analysis via a camera with EEG signal processing using the EMOTIV Insight EEG dataset. The system generates control commands by identifying specific EEG patterns linked to facial expressions such as eye blinking, winking left and right, and smiling. Simultaneously, the system uses computer vision algorithms and inertial measurements to analyze gaze direction in order to establish the user's intended steering. The outcomes of the experiments prove that the proposed system is reliable and efficient in meeting the various requirements of people, presenting a positive development in the field of smart wheelchair technology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. QoE Estimation of WebRTC-based Audio-visual Conversations from Facial and Speech Features.
- Author
-
Bingöl, Gülnaziye, Porcu, Simone, Floris, Alessandro, and Atzori, Luigi
- Subjects
SPEECH ,MULTISENSOR data fusion ,FACIAL expression ,VIDEOCONFERENCING ,DEAF children - Abstract
The utilization of user's facial- and speech-related features for the estimation of the Quality of Experience (QoE) of multimedia services is still underinvestigated despite its potential. Currently, only the use of either facial or speech features individually has been proposed, and relevant limited experiments have been performed. To advance in this respect, in this study, we focused on WebRTC-based videoconferencing, where it is often possible to capture both the facial expressions and vocal speech characteristics of the users. First, we performed thorough statistical analysis to identify the most significant facial- and speech-related features for QoE estimation, which we extracted from the participants' audio-video data collected during a subjective assessment. Second, we trained individual QoE estimation machine learning-based models on the separated facial and speech datasets. Finally, we employed data fusion techniques to combine the facial and speech datasets into a single dataset to enhance the QoE estimation performance due to the integrated knowledge provided by the fusion of facial and speech features. The obtained results demonstrate that the data fusion technique based on the Improved Centered Kernel Alignment (ICKA) allows for reaching a mean QoE estimation accuracy of 0.93, whereas the values of 0.78 and 0.86 are reached when using only facial or speech features, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
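The Improved Centered Kernel Alignment (ICKA) fusion used in the record above is not detailed in the abstract; as a sketch of the standard linear CKA similarity it builds on, the snippet below scores the alignment between two feature matrices. The variable names and synthetic data are assumptions for illustration.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two feature matrices of
    shape (n_samples, n_features); returns a similarity in [0, 1]."""
    X = X - X.mean(axis=0)                      # centre each feature column
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2  # cross-covariance energy
    return hsic / (np.linalg.norm(X.T @ X, "fro") *
                   np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
facial = rng.normal(size=(50, 6))           # stand-in "facial" features
speech = facial @ rng.normal(size=(6, 4))   # linearly related "speech" features
noise = rng.normal(size=(50, 4))            # unrelated features
```

A high score for the linearly related pair and a low score for the unrelated pair is the behaviour a fusion pipeline would exploit when aligning the two feature sets before combining them.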
32. The Predictive Role of the Posterior Cerebellum in the Processing of Dynamic Emotions.
- Author
-
Malatesta, Gianluca, D'Anselmo, Anita, Prete, Giulia, Lucafò, Chiara, Faieta, Letizia, and Tommasi, Luca
- Subjects
CEREBELLUM ,FACIAL expression & emotions (Psychology) ,EMOTIONS ,FACIAL expression ,SOCIAL adjustment ,EMOTIONAL conditioning - Abstract
Recent studies have bolstered the important role of the cerebellum in high-level socio-affective functions. In particular, neuroscientific evidence shows that the posterior cerebellum is involved in social cognition and emotion processing, presumably through its involvement in temporal processing and in predicting the outcomes of social sequences. We used cerebellar transcranial random noise stimulation (ctRNS) targeting the posterior cerebellum to affect the performance of 32 healthy participants during an emotion discrimination task, including both static and dynamic facial expressions (i.e., transitioning from a static neutral image to a happy/sad emotion). ctRNS, compared to the sham condition, significantly reduced the participants' accuracy to discriminate static sad facial expressions, but it increased participants' accuracy to discriminate dynamic sad facial expressions. No effects emerged with happy faces. These findings may suggest the existence of two different circuits in the posterior cerebellum for the processing of negative emotional stimuli: a first, time-independent mechanism which can be selectively disrupted by ctRNS, and a second, time-dependent mechanism of predictive "sequence detection" which can be selectively enhanced by ctRNS. This latter mechanism might be included among the cerebellar operational models constantly engaged in the rapid adjustment of social predictions based on dynamic behavioral information inherent to others' actions. We speculate that it might be one of the basic principles underlying the understanding of other individuals' social and emotional behaviors during interactions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. The Relationship between Psychopathic Traits and Facial Emotion Recognition in a Naturalistic Photo Set
- Author
-
Remmel, Rheanna J., Glenn, Andrea L., and Attya, Rachel L.
- Published
- 2024
- Full Text
- View/download PDF
34. Machine learning for human emotion recognition: a comprehensive review
- Author
-
Younis, Eman M. G., Mohsen, Someya, Houssein, Essam H., and Ibrahim, Osman Ali Sadek
- Published
- 2024
- Full Text
- View/download PDF
35. Instructors’ pointing gestures and positive facial expressions hinder learning in video lectures: Insights from teachers and students in China
- Author
-
Pi, Zhongling, Ling, Hongjuan, Li, Xiying, and Wang, Qin
- Published
- 2024
- Full Text
- View/download PDF
36. Inconsistent effects of components as evidence for non-compositionality in chimpanzee face-gesture combinations? A response to Oña et al. (2019).
- Author
-
Cauté, Maxime, Chemla, Emmanuel, and Schlenker, Philippe
- Subjects
CHIMPANZEES ,FACIAL expression ,GESTURE - Abstract
Using field observations from a sanctuary, Oña and colleagues (DOI: 10.7717/peerj.7623) investigated the semantics of face-gesture combinations in chimpanzees (Pan troglodytes). The response of the animals to these signals was encoded as a binary measure: positive interactions such as approaching or grooming were considered affiliative; ignoring or attacking was considered non-affiliative. The relevant signals are illustrated in Fig. 1 (https://doi.org/10.7717/peerj.7623/fig-1), together with the outcome in terms of average affiliativeness. The authors observe that there seems to be no systematicity in the way the faces modify the responses to the gestures, sometimes reducing affiliativeness, sometimes increasing it. A strong interpretation of this result would be that the meaning of a gesture-face combination cannot be derived from the meaning of the gesture and the meaning of the face, that is, that the interpretation of chimpanzees' face-gesture combinations is non-compositional in nature. We will revisit this conclusion: we will exhibit simple compositional systems which, after all, may be plausible. At the methodological level, we argue that it is critical to lay out the theoretical options explicitly for a complete comparison of their pros and cons. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Who gets caught by the emotion? Attentional biases toward emotional facial expressions and their link to social anxiety and autistic traits.
- Author
-
Folz, Julia, Roth, Tom S., Nikolić, Milica, and Kret, Mariska E.
- Subjects
ATTENTIONAL bias ,SOCIAL anxiety ,FACIAL expression ,SELF-expression ,EMOTIONS ,AUTISM spectrum disorders - Abstract
The emotional facial expressions of other individuals are a valuable information source in adapting behaviour to situational demands, and have been found to receive prioritized attention. Yet, enhanced attentional biases, such as a bias to social threat in Social Anxiety Disorder (SAD), or blunted attention to emotional information, as assumed in Autism Spectrum Disorder (ASD), can easily become maladaptive in daily life. In order to investigate individual differences in attentional biases toward different emotional expressions (angry, happy, sad, and fearful versus neutral) and their links to social anxiety and autistic traits, we tested 104 healthy participants with an emotional dot-probe paradigm on a touch screen, and measured clinical trait levels associated with ASD and SAD. While confirming the presence of attentional biases toward all emotional expressions, we did not find robust evidence for systematic links between these biases and either clinical trait dimension. Only an exploratory Bayesian analysis pointed to a less pronounced bias towards happy facial expressions with higher autistic trait levels. Moreover, a closer examination of the attentional bias towards angry facial expressions suggested that alterations in this bias might depend on a complex interplay between both trait dimensions. Novel approaches in the assessment of attentional biases might yield the potential to describe disorder-specific biases in attention to emotions more validly. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
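The emotional dot-probe paradigm mentioned in the record above is conventionally scored as a reaction-time difference. The sketch below shows that standard bias index; the exact index used by the paper may differ, and the reaction times are hypothetical.

```python
from statistics import mean

def attentional_bias(congruent_rts, incongruent_rts):
    """Dot-probe bias index: mean reaction time when the probe replaces
    the neutral face (incongruent) minus mean reaction time when it
    replaces the emotional face (congruent). A positive value indicates
    attention drawn toward the emotional expression."""
    return mean(incongruent_rts) - mean(congruent_rts)

# Hypothetical reaction times in milliseconds
congruent = [412, 398, 405, 420]      # probe at the emotional face location
incongruent = [441, 430, 437, 452]    # probe at the neutral face location
bias = attentional_bias(congruent, incongruent)  # positive -> vigilance
```

Individual-difference studies like this one then correlate such per-participant bias scores with trait questionnaires.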
38. Do masks cover more than just a face? A study on how facemasks affect the perception of emotional expressions according to their degree of intensity.
- Author
-
Thomas, Pauline J. N. and Caharel, Stéphanie
- Abstract
Emotional facial expressions convey crucial information in nonverbal communication and serve as a mediator in face-to-face relationships. Their recognition is thought to rely on specific facial traits depending on the perceived emotion. During the COVID-19 pandemic, wearing a facemask has thus disrupted the human ability to read emotions from faces. Yet, these effects are usually assessed across studies using faces expressing stereotypical and exaggerated emotions, which is far removed from real-life conditions. The objective of the present study was to evaluate the impact of facemasks through an emotion categorization task using morphs ranging from a neutral face to an expressive face (anger, disgust, fear, happiness, and sadness), from 0% neutral to 100% expressive in 20% steps. Our results revealed a strong impact of facemasks on the recognition of expressions of disgust, happiness, and sadness, resulting in a decrease in performance and an increase in misinterpretations, both for low and high levels of intensity. In contrast, the recognition of anger and fear, as well as neutral expressions, was found to be less impacted by mask-wearing. Future studies should address this issue from a more ecological point of view with the aim of taking concrete adaptive measures in the context of daily interactions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
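The 0%-to-100% morph continuum described above can be approximated, at its simplest, as a pixel-wise cross-fade between the two endpoint images. The sketch below is a minimal illustration under that assumption; real morphing software also warps facial geometry between landmark points, which this omits.

```python
import numpy as np

def morph(neutral, expressive, level):
    """Pixel-wise linear cross-fade between a neutral and an expressive
    face image; `level` is the expression intensity in [0, 1]."""
    assert 0.0 <= level <= 1.0
    return (1.0 - level) * neutral + level * expressive

neutral = np.zeros((4, 4))             # toy "neutral" image
expressive = np.full((4, 4), 100.0)    # toy "expressive" image
levels = [i / 5 for i in range(6)]     # 0%, 20%, ..., 100% expressive
continuum = [morph(neutral, expressive, a) for a in levels]
```

Each element of `continuum` corresponds to one intensity step of the categorization task's stimulus set.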
39. The Story behind the Mask: A Narrative Review on Hypomimia in Parkinson's Disease.
- Author
-
Bianchini, Edoardo, Rinaldi, Domiziana, Alborghetti, Marika, Simonelli, Marta, D'Audino, Flavia, Onelli, Camilla, Pegolo, Elena, and Pontieri, Francesco E.
- Subjects
PARKINSON'S disease ,FACIAL expression & emotions (Psychology) ,EMOTION recognition ,FACIAL expression ,SELF-expression ,SYMPTOMS - Abstract
Facial movements are crucial for social and emotional interaction and well-being. Reduced facial expressions (i.e., hypomimia) is a common feature in patients with Parkinson's disease (PD) and previous studies linked this manifestation to both motor symptoms of the disease and altered emotion recognition and processing. Nevertheless, research on facial motor impairment in PD has been rather scarce and only a limited number of clinical evaluation tools are available, often suffering from poor validation processes and high inter- and intra-rater variability. In recent years, the availability of technology-enhanced quantification methods of facial movements, such as automated video analysis and machine learning application, led to increasing interest in studying hypomimia in PD. In this narrative review, we summarize the current knowledge on pathophysiological hypotheses at the basis of hypomimia in PD, with particular focus on the association between reduced facial expressions and emotional processing and analyze the current evaluation tools and management strategies for this symptom, as well as future research perspectives. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. The Effect of Synchrony of Happiness on Facial Expression of Negative Emotion When Lying
- Author
-
Solbu, Anne, Frank, Mark G., Xu, Fei, Nwogu, Ifeoma, and Neurohr, Madison
- Published
- 2024
- Full Text
- View/download PDF
41. AutoMEDSys: automatic facial Micro-Expression Detection System using random Fourier Features based Neural Network
- Author
-
Yadav, Rahul, Priyanka, and Kacker, Priyanka
- Published
- 2024
- Full Text
- View/download PDF
42. Using facial expressions instead of response keys in the implicit association test
- Author
-
Bar-Anan, Yoav and Hershman, Ronen
- Published
- 2024
- Full Text
- View/download PDF
43. Real-Time Analysis of Facial Expressions for Mood Estimation
- Author
-
Juan Sebastián Filippini, Javier Varona, and Cristina Manresa-Yee
- Subjects
affective analysis ,mood ,facial expressions ,computer vision ,visual tracking ,Technology ,Engineering (General). Civil engineering (General) ,TA1-2040 ,Biology (General) ,QH301-705.5 ,Physics ,QC1-999 ,Chemistry ,QD1-999 - Abstract
This paper proposes a model-based method for real-time automatic mood estimation in video sequences. The approach is customized by learning the person’s specific facial parameters, which are transformed into facial Action Units (AUs). A model mapping for mood representation is used to describe moods in terms of the PAD space: Pleasure, Arousal, and Dominance. From the intersection of these dimensions, eight octants represent fundamental mood categories. In the experimental evaluation, a stimulus video randomly selected from a set prepared to elicit different moods was played to participants while their facial expressions were recorded. The experiment showed that Dominance is the dimension least impacted by facial expression, so this dimension could be eliminated from mood categorization. Four categories corresponding to the quadrants of the Pleasure–Arousal (PA) plane, “Exalted”, “Calm”, “Anxious” and “Bored”, were then defined, with two more categories for the “Positive” and “Negative” signs of the Pleasure (P) dimension. Results showed 73% agreement in the PA categorization and 94% in the P dimension, demonstrating that facial expressions can be used to estimate moods within these defined categories and provide cues for assessing users’ subjective states in real-world applications.
- Published
- 2024
- Full Text
- View/download PDF
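The quadrant scheme in the abstract above reduces to a sign test on the Pleasure and Arousal coordinates. The sketch below encodes it; the quadrant names come from the abstract, but the assignment of names to sign combinations and the zero-centred axes are assumptions based on the usual PA semantics.

```python
def pa_category(pleasure, arousal):
    """Map a point on the Pleasure-Arousal plane to the four quadrant
    moods named in the abstract; axes are assumed centred on zero."""
    if pleasure >= 0:
        return "Exalted" if arousal >= 0 else "Calm"
    return "Anxious" if arousal >= 0 else "Bored"

def p_sign(pleasure):
    """Binary categorisation of the sign of the Pleasure dimension."""
    return "Positive" if pleasure >= 0 else "Negative"
```

In the paper's pipeline, the (pleasure, arousal) input to such a mapping would be estimated from facial Action Units rather than given directly.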
44. Editorial: Machine learning approaches to recognize human emotions.
- Author
-
Valderrama, Camilo E., Gomes Ferreira, Marcelo Gitirana, Torres, Juan Manuel Mayor, Garcia-Ramirez, Alejandro Rafael, and Camorlinga, Sergio G.
- Subjects
EMOTIONS ,MACHINE learning ,EMOTION recognition ,AFFECTIVE computing ,CONVOLUTIONAL neural networks ,ARTIFICIAL intelligence - Published
- 2024
- Full Text
- View/download PDF
45. A Review of Automatic Pain Assessment from Facial Information Using Machine Learning
- Author
-
Najib Ben Aoun
- Subjects
automatic pain assessment ,pain intensity estimation ,facial information ,facial expressions ,machine learning ,deep learning ,Technology - Abstract
Pain assessment has become an important component in modern healthcare systems. It aids medical professionals in patient diagnosis and in providing the appropriate care and therapy. Conventionally, patients are asked to report their pain level verbally. However, this subjective method is generally inaccurate, is not possible for non-communicative people, can be affected by physiological and environmental factors, and is time-consuming, which renders it inefficient in healthcare settings. So, there has been a growing need to build objective, reliable and automatic pain assessment alternatives. In fact, due to the efficiency of facial expressions as pain biomarkers that accurately express pain intensity, and the power of machine learning methods to effectively learn the subtle nuances of pain expressions and accurately predict pain intensity, automatic pain assessment methods have evolved rapidly. This paper reviews recent spatial facial expression and machine learning-based pain assessment methods. Moreover, we highlight the pain intensity scales, datasets and method performance evaluation criteria. In addition, these methods’ contributions, strengths and limitations are reported and discussed. Additionally, the review lays the groundwork for further study and improvement toward more accurate automatic pain assessment.
- Published
- 2024
- Full Text
- View/download PDF
46. Application of Stereo Digital Image Correlation on Facial Expressions Sensing
- Author
-
Xuanshi Cheng, Shibin Wang, Huixin Wei, Xin Sun, Lipan Xin, Linan Li, Chuanwei Li, and Zhiyong Wang
- Subjects
facial expressions ,digital image correlation ,deformation ,dynamic analysis ,Chemical technology ,TP1-1185 - Abstract
Facial expression is an important way to reflect human emotions and it represents a dynamic deformation process. Analyzing facial movements is an effective means of understanding expressions. However, there is currently a lack of methods capable of analyzing the dynamic details of full-field deformation in expressions. In this paper, in order to enable effective dynamic analysis of expressions, a classic optical measuring method called stereo digital image correlation (stereo-DIC or 3D-DIC) is employed to analyze the deformation fields of facial expressions. The forming processes of six basic facial expressions of certain experimental subjects are analyzed through the displacement and strain fields calculated by 3D-DIC. The displacement fields of each expression exhibit strong consistency with the action units (AUs) defined by the classical Facial Action Coding System (FACS). Moreover, it is shown that the gradient of the displacement, i.e., the strain fields, offers special advantages in characterizing facial expressions due to their localized nature, effectively sensing the nuanced dynamics of facial movements. By processing extensive data, this study demonstrates two featured regions in six basic expressions, one where deformation begins and the other where deformation is most severe. Based on these two regions, the temporal evolutions of the six basic expressions are discussed. The presented investigations demonstrate the superior performance of 3D-DIC in the quantitative analysis of facial expressions. The proposed analytical strategy might have potential value in objectively characterizing human expressions based on quantitative measurement.
- Published
- 2024
- Full Text
- View/download PDF
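The record above notes that the strain fields are the gradient of the DIC displacement fields. As a hedged sketch of that post-processing step (not the paper's own code), the snippet below computes the standard infinitesimal strain components from 2-D displacement fields on a regular grid.

```python
import numpy as np

def small_strain(u, v, spacing=1.0):
    """Infinitesimal (small-deformation) strain components from 2-D
    displacement fields `u` (x-displacement) and `v` (y-displacement)
    sampled on a regular grid. Returns (exx, eyy, exy)."""
    du_dy, du_dx = np.gradient(u, spacing)  # np.gradient orders axes: rows, cols
    dv_dy, dv_dx = np.gradient(v, spacing)
    exx = du_dx                             # normal strain along x
    eyy = dv_dy                             # normal strain along y
    exy = 0.5 * (du_dy + dv_dx)             # engineering shear / 2
    return exx, eyy, exy

# A uniform 1% stretch along x: u = 0.01 * x, v = 0
y, x = np.mgrid[0:5, 0:5].astype(float)
exx, eyy, exy = small_strain(0.01 * x, np.zeros_like(x))
```

The localized strain concentrations such a computation reveals are what the paper uses to identify the regions where an expression begins and where deformation is most severe.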