Facial expression recognition system using multimodal sensors.
- Source :
- AIP Conference Proceedings; 2025, Vol. 3159 Issue 1, p1-5, 5p
- Publication Year :
- 2025
-
Abstract
- Recognizing people's emotions is crucial in interpersonal interactions, and automatic emotion identification has been a significant field of study for decades. The extraction and interpretation of emotions are therefore crucial components of human-computer interaction. Facial expression recognition (FER) may be used, among other things, to identify physiological illnesses. Controlled FER systems obtain extremely accurate results (about 97%), but in the wild, illumination fluctuations, head posture, and object dependencies reduce accuracy, which can decrease to 50%. To address this, we concentrate on three distinct groups of sensors to increase the precision of facial expression identification both in the lab and in the field. The first group contains sophisticated face sensors that detect tiny dynamic changes in facial components, e.g., eye trackers that help distinguish known faces from background noise. The second group comprises non-visual sensors, e.g., audio, EEG, and depth sensors, which cope with varying lighting and changing locations. The third group consists of target sensors, e.g., infrared thermal sensors that help the FER system withstand changes in light by screening out undesired visual information. This research utilises the Real-world Affective Faces Database (RAF-DB), which comprises around 30,000 face pictures of individuals of various ages and ethnicities, captured in random poses and illuminations. The biggest downside of this database is that it is controlled by the sensors stated above. Classification is performed with Deep Locality-Preserving Convolutional Neural Networks (DLP-CNN), and expressions are classified into seven fundamental types. [ABSTRACT FROM AUTHOR]
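The abstract names its classifier but gives no implementation details. Below is a minimal, hypothetical PyTorch sketch of the general idea behind DLP-CNN-style training: a small CNN whose loss combines cross-entropy with a locality-preserving term that pulls each embedding toward the mean of its k nearest same-class neighbours in the batch. The network `SmallFERNet`, the loss weight 0.1, k = 3, the 100x100 input size, and the label order are all illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a seven-class expression classifier with a
# locality-preserving loss term (not the authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed label order for the seven basic expressions used with RAF-DB.
EXPRESSIONS = ["surprise", "fear", "disgust", "happiness",
               "sadness", "anger", "neutral"]

class SmallFERNet(nn.Module):
    """Toy CNN producing an embedding plus 7-way logits."""
    def __init__(self, embed_dim=128, num_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.embed = nn.Linear(64, embed_dim)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        f = self.features(x).flatten(1)
        z = self.embed(f)
        return z, self.classifier(z)

def locality_preserving_loss(z, labels, k=3):
    """Pull each embedding toward the mean of its k nearest
    same-class neighbours in the batch (simplified LP term)."""
    loss = z.new_zeros(())
    for c in labels.unique():
        zc = z[labels == c]
        if zc.size(0) < 2:
            continue
        d = torch.cdist(zc, zc)                 # pairwise distances
        d.fill_diagonal_(float("inf"))          # exclude self-matches
        kk = min(k, zc.size(0) - 1)
        idx = d.topk(kk, largest=False).indices # k nearest neighbours
        loss = loss + ((zc - zc[idx].mean(1)) ** 2).sum(1).mean()
    return loss / len(labels.unique())

# One training step on a random batch (stand-in for RAF-DB images,
# which are 100x100 aligned crops).
model = SmallFERNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(16, 3, 100, 100)
y = torch.randint(0, 7, (16,))
z, logits = model(x)
loss = F.cross_entropy(logits, y) + 0.1 * locality_preserving_loss(z, y)
opt.zero_grad(); loss.backward(); opt.step()
print(f"loss={loss.item():.3f}")
```

The locality-preserving term is what distinguishes this style of training from plain softmax classification: it encourages compact, locally clustered per-class embeddings, which the original DLP-CNN work reports as helpful for in-the-wild expressions.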
Details
- Language :
- English
- ISSN :
- 0094-243X
- Volume :
- 3159
- Issue :
- 1
- Database :
- Complementary Index
- Journal :
- AIP Conference Proceedings
- Publication Type :
- Conference
- Accession number :
- 182161909
- Full Text :
- https://doi.org/10.1063/5.0247311