Character-based Subtitle Generation by Learning of Multimodal Concept Hierarchy from Cartoon Videos
- Source :
- Journal of KIISE. 42:451-458
- Publication Year :
- 2015
- Publisher :
- Korean Institute of Information Scientists and Engineers, 2015.
-
Abstract
- Previous multimodal learning methods focus on problem-solving aspects, such as image and video search and tagging, rather than on knowledge acquisition via content modeling. In this paper, we propose the Multimodal Concept Hierarchy (MuCH), a content modeling method that uses a cartoon video dataset, together with a character-based subtitle generation method based on the learned model. The MuCH model has a multimodal hypernetwork layer, in which the patterns of words and image patches are represented, and a concept layer, in which each concept variable is represented by a probability distribution over the words and image patches. The model can learn the characteristics of the characters as concepts from the video subtitles and scene images by using a Bayesian learning method, and can also generate character-based subtitles from the learned model when text queries are provided. As an experiment, the MuCH model learned concepts from 'Pororo' cartoon videos totaling 268 minutes in length and generated character-based subtitles. Finally, we compare the results with those of other multimodal learning models. The experimental results indicate that, given the same text query, our model generates more accurate and more character-specific subtitles than other models.
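- The concept layer described above can be illustrated with a minimal sketch (not the authors' code): each character concept is a probability distribution over subtitle words, estimated with add-one smoothing as a stand-in for the paper's Bayesian learning, and a text query is matched to the most likely character. All names and data here are illustrative; the real model additionally covers image patches.

```python
import math
from collections import Counter

def learn_concepts(subtitles_by_character):
    """Estimate P(word | concept) for each character via relative
    frequencies with add-one smoothing (a simplified stand-in for
    the Bayesian learning used in the paper)."""
    vocab = {w for lines in subtitles_by_character.values()
             for line in lines for w in line.split()}
    concepts = {}
    for char, lines in subtitles_by_character.items():
        counts = Counter(w for line in lines for w in line.split())
        total = sum(counts.values()) + len(vocab)
        concepts[char] = {w: (counts[w] + 1) / total for w in vocab}
    return concepts

def best_character(concepts, query):
    """Score a text query under each concept distribution and
    return the character with the highest log-likelihood."""
    def loglik(dist):
        return sum(math.log(dist[w]) for w in query.split() if w in dist)
    return max(concepts, key=lambda c: loglik(concepts[c]))

# Toy subtitle corpus keyed by character (hypothetical lines).
corpus = {
    "Pororo": ["let us fly together", "i want to fly"],
    "Crong": ["crong crong hungry", "crong wants food"],
}
concepts = learn_concepts(corpus)
print(best_character(concepts, "fly together"))  # → Pororo
```

Given the query "fly together", the Pororo concept assigns higher probability to both words than the Crong concept does, so the query is associated with Pororo; the paper's model uses the same kind of concept-conditional scoring to generate character-specific subtitles.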
- Subjects :
- Computer science
Artificial intelligence
Natural language processing
Image processing and computer vision
Multimodal learning
Bayesian inference
Knowledge acquisition
Probability distribution
Subtitle generation
Details
- ISSN :
- 2383-630X
- Volume :
- 42
- Database :
- OpenAIRE
- Journal :
- Journal of KIISE
- Accession number :
- edsair.doi...........61f9b1a8984ce1ee3b4f602543e96427
- Full Text :
- https://doi.org/10.5626/jok.2015.42.4.451