Optimizing photo-to-anime translation with prestyled paired datasets.
- Authors
- Chang, Chuan-Wang and Dharmawan, Pratamagusta
- Subjects
Artificial intelligence; Japanese films; Cognitive styles; Image processing; Anime
- Abstract
Animation is a widespread artistic expression that holds a special place in people's hearts. Traditionally, animation creation has relied heavily on manual techniques, demanding skilled drawing abilities and a significant amount of time. For instance, many Japanese anime films draw inspiration from real-world settings, requiring access to relevant references and artists capable of translating them into anime visuals. Consequently, technology that automatically converts photographs into anime imagery holds great significance. Numerous style transfer methods based on unsupervised learning have been developed and have achieved impressive results. However, unsupervised methods struggle when a single image contains multiple styles, because they learn style globally across the whole image. To solve this problem, we propose splitting the styles within an image into multiple classes: sky, buildings, greenery, water, and other objects. We then style each separated class using existing image-to-image translation models. Finally, we train a pix2pix model to learn the style transfer in a paired manner. The experimental results show that images are effectively translated into the anime-styled image domain, with results comparable to existing unsupervised GAN-based methods. The proposed method can effectively transfer the style of real-world photos into the anime-styled image domain. [ABSTRACT FROM AUTHOR]
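The per-class pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the segmentation model, the per-class translation models, and the pix2pix training step are all stand-ins here (the `toy_stylizers` are hypothetical placeholders), and only the mask-based compositing of prestyled classes is shown.

```python
import numpy as np

# Semantic classes named in the abstract: sky, buildings, greenery,
# water, and other objects.
CLASSES = ["sky", "buildings", "greenery", "water", "other"]

def stylize_per_class(photo, seg_mask, stylizers):
    """Compose a prestyled target image by applying a per-class
    stylizer to the pixels belonging to each semantic class.

    photo:     H x W x 3 float array in [0, 1] (real-world photo)
    seg_mask:  H x W int array whose values index into CLASSES
               (assumed to come from a semantic segmentation model)
    stylizers: dict mapping class name -> callable(image) -> image,
               standing in for the image-to-image translation models
    """
    out = np.zeros_like(photo)
    for idx, name in enumerate(CLASSES):
        mask = (seg_mask == idx)[..., None]   # H x W x 1 boolean mask
        styled = stylizers[name](photo)       # style the whole image...
        out = np.where(mask, styled, out)     # ...keep only this class
    return out

# Toy stylizers (hypothetical): each just brightens by a class offset.
toy_stylizers = {name: (lambda img, s=0.1 * i: np.clip(img + s, 0.0, 1.0))
                 for i, name in enumerate(CLASSES)}

photo = np.random.rand(4, 4, 3)
seg = np.random.randint(0, len(CLASSES), size=(4, 4))
styled = stylize_per_class(photo, seg, toy_stylizers)
# Each (photo, styled) pair would then serve as a paired training
# example for a pix2pix-style model.
```

The design point is that every class is stylized by a model suited to it, and the composite serves as the paired ground truth that supervised training needs, sidestepping the global-style limitation of unsupervised methods.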
- Published
- 2024