
Joint Geometric-Semantic Driven Character Line Drawing Generation

Authors :
Fang, Cheng-Yu
Han, Xian-Feng
Publication Year :
2022

Abstract

Character line drawing synthesis can be formulated as a special case of the image-to-image translation problem, in which a photo is automatically transformed into line-drawing style. In this paper, we present the first generative adversarial network-based, end-to-end trainable translation architecture, dubbed P2LDGAN, for the automatic generation of high-quality character line drawings from input photos/images. The core component of our approach is the joint geometric-semantic driven generator, which uses our well-designed cross-scale dense skip connection framework to embed learned geometric and semantic information for generating delicate line drawings. To support the evaluation of our model, we release a new dataset of 1,532 well-matched pairs of freehand character line drawings and their corresponding character images/photos, where the line drawings cover diverse styles and were drawn manually by skilled artists. Extensive experiments on our introduced dataset demonstrate the superior performance of our proposed model over state-of-the-art approaches in quantitative, qualitative and human evaluations. Our code, models and dataset will be available on GitHub.
Comment: Published in ICMR '23: Proceedings of the 2023 ACM International Conference on Multimedia Retrieval, June 2023
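
The abstract describes a generator whose decoder fuses features from multiple encoder scales through dense skip connections. The sketch below is not the authors' released code; it is a minimal, hypothetical PyTorch illustration of that general idea, in which every decoder stage receives all encoder feature maps, resized and concatenated. Channel widths, depth, normalization and the sigmoid output head are all illustrative assumptions.

```python
# Minimal sketch (not P2LDGAN's actual implementation) of a U-Net-style
# generator whose decoder stages receive every encoder scale, resized and
# concatenated, approximating cross-scale dense skip connections.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch, stride=1):
    # Conv + InstanceNorm + ReLU, a common choice in image translation GANs.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class CrossScaleSkipGenerator(nn.Module):
    def __init__(self, in_ch=3, out_ch=1, widths=(32, 64, 128, 256)):
        super().__init__()
        # Encoder: each block after the first halves the spatial resolution.
        self.enc = nn.ModuleList()
        prev = in_ch
        for i, w in enumerate(widths):
            self.enc.append(conv_block(prev, w, stride=1 if i == 0 else 2))
            prev = w
        # Decoder: each stage sees its upsampled input plus all encoder scales.
        skip_ch = sum(widths)
        self.dec = nn.ModuleList()
        prev = widths[-1]
        for w in reversed(widths[:-1]):
            self.dec.append(conv_block(prev + skip_ch, w))
            prev = w
        self.head = nn.Conv2d(prev, out_ch, kernel_size=1)

    def forward(self, x):
        feats = []
        h = x
        for block in self.enc:
            h = block(h)
            feats.append(h)
        for block in self.dec:
            h = F.interpolate(h, scale_factor=2, mode="bilinear",
                              align_corners=False)
            # Dense cross-scale skips: resize every encoder feature map to the
            # current decoder resolution and concatenate along channels.
            skips = [F.interpolate(f, size=h.shape[-2:], mode="bilinear",
                                   align_corners=False) for f in feats]
            h = block(torch.cat([h] + skips, dim=1))
        # Line drawings are near-binary, so a sigmoid head is one plausible choice.
        return torch.sigmoid(self.head(h))


if __name__ == "__main__":
    g = CrossScaleSkipGenerator()
    photo = torch.randn(1, 3, 256, 256)
    print(g(photo).shape)  # torch.Size([1, 1, 256, 256])
```

In an adversarial setup such as the one the abstract describes, a generator of this kind would be trained against a discriminator on the paired photo/line-drawing data; the discriminator and losses are omitted here.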

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1333776843
Document Type :
Electronic Resource
Full Text :
https://doi.org/10.1145/3591106.3592216