Leveraging facial expressions as emotional context in image captioning.

Authors :
Das, Riju
Wu, Nan
Dev, Soumyabrata
Source :
Multimedia Tools & Applications; Sep2024, Vol. 83 Issue 30, p75195-75216, 22p
Publication Year :
2024

Abstract

Image captioning has emerged as a prominent approach for generating verbal descriptions of images that humans can read and understand. Numerous techniques and models in this domain have predominantly focused on analyzing the factual elements present within an image, employing convolutional neural networks (CNN) and long short-term memory (LSTM) networks to generate captions. However, an inherent limitation of these existing approaches is their failure to consider the emotional aspects exhibited by the main subject within an image, potentially leading to inaccuracies in reflecting the conveyed emotional content. Acknowledging this limitation, this paper endeavors to construct an improved model dedicated to extracting human emotions from images and seamlessly embedding emotional attributes into the accompanying captions. In our research, we employ the widely accessible benchmark image captioning dataset, Flickr8k. Our ultimate objective is to establish a more appropriate and impactful model for images containing human faces that provides more accurate and impactful captions. [ABSTRACT FROM AUTHOR]
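The abstract's core idea, enriching a factual caption with an emotion attribute extracted from the subject's face, could be sketched as a simple post-processing rule. This is purely illustrative and not the authors' method (their model presumably learns this end-to-end with CNN/LSTM components); the emotion-to-adjective mapping and the noun list below are invented for illustration:

```python
# Illustrative sketch only: inject a facial-emotion attribute into a
# factual caption. The mapping and noun list are hypothetical.
EMOTION_ADJECTIVE = {"happy": "smiling", "sad": "unhappy", "angry": "angry"}

# Hypothetical set of nouns denoting the human subject of a caption.
HUMAN_NOUNS = {"man", "woman", "boy", "girl", "person", "child"}

def emotionalize_caption(caption: str, emotion: str) -> str:
    """Insert an emotion adjective before the first human noun, if any.

    `emotion` would come from a facial-expression classifier run on the
    detected face region; here it is just a string label.
    """
    adjective = EMOTION_ADJECTIVE.get(emotion)
    if adjective is None:
        return caption  # unknown or neutral emotion: leave caption as-is
    words = caption.split()
    for i, word in enumerate(words):
        if word in HUMAN_NOUNS:
            return " ".join(words[:i] + [adjective] + words[i:])
    return caption  # no human subject found in the caption

print(emotionalize_caption("a man rides a horse", "happy"))
# → "a smiling man rides a horse"
```

A learned model would instead condition the LSTM decoder on the emotion label (for example, as an extra input token or embedding), so the adjective choice and placement emerge from training rather than a fixed rule.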

Details

Language :
English
ISSN :
13807501
Volume :
83
Issue :
30
Database :
Complementary Index
Journal :
Multimedia Tools & Applications
Publication Type :
Academic Journal
Accession number :
179395149
Full Text :
https://doi.org/10.1007/s11042-023-17904-3