
A multi-modal personality prediction system.

Authors :
Suman, Chanchal
Saha, Sriparna
Gupta, Aditya
Pandey, Saurabh Kumar
Bhattacharyya, Pushpak
Source :
Knowledge-Based Systems. Jan2022, Vol. 236.
Publication Year :
2022

Abstract

Personality reveals an individual's behavior, mental health, emotions, life choices, social nature, and thought patterns. Cyber forensics, personalized services, and recommender systems are some example applications of automatic personality prediction. A deep-learning-based personality prediction system has been developed in this work. Facial and ambient features are extracted from the visual modality using Multi-task Cascaded Convolutional Networks (MTCNN) and ResNet, respectively; the audio features are extracted using the VGGish Convolutional Neural Network (VGGish CNN), and the text features are extracted using an n-gram Convolutional Neural Network (CNN). The extracted features are then passed to a fully connected layer followed by a sigmoid for the final output prediction. Finally, the text, visual, and audio modalities are combined in different ways: (i) concatenation of features in a multi-modal setting, and (ii) application of different attention mechanisms for fusing the features. The dataset released in Chalearn-17 is used for evaluating the performance of the system. From the obtained results, it can be concluded that the concatenation of features extracted from different modalities attains results comparable to those of the averaging method (late fusion). It is also shown that a handful of images is enough for attaining comparable performance. [ABSTRACT FROM AUTHOR]
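
As a rough illustration of the two fusion strategies named in the abstract, the PyTorch sketch below contrasts (i) concatenation of per-modality feature vectors with (ii) a simple attention-weighted fusion, each followed by a fully connected layer and a sigmoid. This is a minimal sketch, not the authors' implementation: the feature dimensions, the attention formulation, and the five-trait (Big Five) output head are assumptions, as the abstract does not specify them.

import torch
import torch.nn as nn

class ConcatFusionPredictor(nn.Module):
    # Fusion (i): concatenate per-modality features, then FC + sigmoid.
    # Dimensions are illustrative guesses (128 matches standard VGGish
    # embeddings; the other sizes are assumed).
    def __init__(self, dim_face=512, dim_scene=512, dim_audio=128,
                 dim_text=256, num_traits=5):
        super().__init__()
        self.fc = nn.Linear(dim_face + dim_scene + dim_audio + dim_text,
                            num_traits)

    def forward(self, face, scene, audio, text):
        fused = torch.cat([face, scene, audio, text], dim=-1)
        return torch.sigmoid(self.fc(fused))  # one score in [0, 1] per trait

class AttentionFusionPredictor(nn.Module):
    # Fusion (ii): learn a scalar attention score per modality and take a
    # weighted sum; assumes all modalities were first projected to `dim`.
    def __init__(self, dim=256, num_traits=5):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # attention score per modality
        self.fc = nn.Linear(dim, num_traits)

    def forward(self, feats):            # feats: (batch, num_modalities, dim)
        weights = torch.softmax(self.score(feats), dim=1)  # (batch, M, 1)
        fused = (weights * feats).sum(dim=1)               # (batch, dim)
        return torch.sigmoid(self.fc(fused))

For example, ConcatFusionPredictor()(torch.randn(2, 512), torch.randn(2, 512), torch.randn(2, 128), torch.randn(2, 256)) yields a (2, 5) tensor of per-trait scores.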

Details

Language :
English
ISSN :
0950-7051
Volume :
236
Database :
Academic Search Index
Journal :
Knowledge-Based Systems
Publication Type :
Academic Journal
Accession number :
154388344
Full Text :
https://doi.org/10.1016/j.knosys.2021.107715