
Style Transformation Method of Stage Background Images by Emotion Words of Lyrics

Authors :
Shuyu Li
Yunsick Sung
Hyewon Yoon
Source :
Mathematics, Volume 9, Issue 15, Article 1831 (2021)
Publication Year :
2021
Publisher :
MDPI AG, 2021.

Abstract

Recently, with the development of computer technology, deep learning has expanded into the field of art, which requires creativity, a uniquely human ability, and an understanding of the human emotions expressed in artworks so that they can be processed as data. The field of art is integrating with various industries; in stage art in particular, artificial intelligence (AI) is being used to create visual images. Because it is difficult for a computer to process the emotions expressed in songs as data, existing stage background images for song performances are designed by humans. Recently, research has been conducted to enable AI to design stage background images on behalf of humans; however, no research has addressed reflecting the emotions contained in song lyrics in stage background images. This paper proposes a style transformation method that reflects those emotions in stage background images. First, the verses and choruses are derived from the song lyrics one at a time, and the emotion words contained in each verse and chorus are extracted. Second, the probability distribution of the emotion words is calculated for each verse and chorus, and the image with the most similar probability distribution is selected for that verse or chorus from an image dataset tagged with emotion words in advance. Finally, for each verse and chorus, a stage background image with the transferred style is output. In an experiment, the similarity between the stage background image and the image transferred to the style of an image with a similar emotion-word probability distribution was 38%, while the similarity between the stage background image and the image transferred to the style of an image with a completely different emotion-word probability distribution was 8%. The proposed method reduced the total variation loss from 1.0777 to 0.1597. Here, the total variation loss is defined as the weighted sum of the content loss and the style loss. This shows that the style-transferred image preserves the edge information (content) of the input image, while its style is close to that of the target style image.
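
As a rough illustration of the selection step described in the abstract, the sketch below builds an emotion-word probability distribution for each verse or chorus and picks the pre-tagged image whose distribution is most similar. The emotion vocabulary, the use of cosine similarity, and all names are assumptions for illustration only; the paper's actual emotion-word set and similarity measure may differ.

```python
from collections import Counter

import numpy as np

# Hypothetical emotion vocabulary; the paper's actual emotion-word set is not given here.
EMOTION_WORDS = ["happy", "sad", "love", "angry", "lonely", "hope"]


def emotion_distribution(text: str) -> np.ndarray:
    """Count emotion words in a verse or chorus and normalize them to a probability distribution."""
    tokens = text.lower().split()
    counts = Counter(t for t in tokens if t in EMOTION_WORDS)
    vec = np.array([counts[w] for w in EMOTION_WORDS], dtype=float)
    total = vec.sum()
    # Fall back to a uniform distribution if the section contains no emotion words.
    return vec / total if total > 0 else np.full(len(EMOTION_WORDS), 1.0 / len(EMOTION_WORDS))


def select_style_image(lyric_section: str, tagged_images: dict) -> str:
    """Pick the pre-tagged image whose emotion-word distribution is closest (cosine similarity)."""
    target = emotion_distribution(lyric_section)

    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    # tagged_images maps image names to precomputed emotion-word distributions.
    return max(tagged_images, key=lambda name: cosine(target, tagged_images[name]))
```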
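The loss described at the end of the abstract can be read as a weighted combination of a content term and a style term, as in standard neural style transfer. The sketch below shows that generic formulation; the Gram-matrix style loss and the weights alpha and beta are assumptions for illustration, not the paper's exact definitions or reported values.

```python
import torch
import torch.nn.functional as F


def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Gram matrix of a (channels, height, width) feature map; captures style statistics."""
    c, h, w = features.shape
    flat = features.view(c, h * w)
    return flat @ flat.t() / (c * h * w)


def weighted_loss(content_feat: torch.Tensor, content_target: torch.Tensor,
                  style_feat: torch.Tensor, style_target: torch.Tensor,
                  alpha: float = 1.0, beta: float = 1e3) -> torch.Tensor:
    """Weighted sum of content loss (feature MSE) and style loss (Gram-matrix MSE).

    alpha and beta are illustrative weights, not values reported in the paper.
    """
    content_loss = F.mse_loss(content_feat, content_target)
    style_loss = F.mse_loss(gram_matrix(style_feat), gram_matrix(style_target))
    return alpha * content_loss + beta * style_loss
```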

Details

Language :
English
ISSN :
2227-7390
Volume :
9
Issue :
15
Database :
OpenAIRE
Journal :
Mathematics
Accession number :
edsair.doi.dedup.....036587c7c53b0a11425a4f7468e17d11