
MOST-Net: A Memory Oriented Style Transfer Network for Face Sketch Synthesis

Authors :
Ji, Fan
Sun, Muyi
Qi, Xingqun
Li, Qi
Sun, Zhenan
Publication Year :
2022

Abstract

Face sketch synthesis has been widely used in multimedia entertainment and law enforcement. Despite recent advances in deep neural networks, accurate and realistic face sketch synthesis remains challenging due to the diversity and complexity of human faces. Current image-to-image translation-based approaches to face sketch synthesis frequently suffer from over-fitting on small-scale datasets. To tackle this problem, we present an end-to-end Memory Oriented Style Transfer Network (MOST-Net) for face sketch synthesis that produces high-fidelity sketches from limited data. Specifically, an external self-supervised dynamic memory module is introduced to capture domain-alignment knowledge over the long term. In this way, the proposed model acquires domain-transfer ability by establishing durable feature-level correspondences between faces and their sketches. Furthermore, we design a novel Memory Refinement Loss (MR Loss) for feature alignment in the memory module, which improves the accuracy of the memory slots in an unsupervised manner. Extensive experiments on the CUFS and CUFSF datasets show that MOST-Net achieves state-of-the-art performance, especially in terms of the Structural Similarity Index (SSIM).

Comment: 7 pages, 4 figures
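The abstract only outlines the mechanism, so the following is a minimal, hypothetical PyTorch sketch of how an external key-value memory could perform photo-to-sketch transfer at the feature level, paired with an unsupervised refinement term. It is not the authors' implementation: the class name DomainMemory, the slot count, the softmax temperature, and the memory_refinement_loss formulation are all assumptions, since the abstract does not specify the actual MR Loss.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainMemory(nn.Module):
    """Illustrative external key-value memory (hypothetical design).

    Keys are intended to index photo-domain features; values hold the
    paired sketch-domain features, so a lookup performs a coarse
    photo-to-sketch translation at the feature level.
    """

    def __init__(self, num_slots: int = 512, dim: int = 256):
        super().__init__()
        # Learnable memory slots; sizes are assumed, not from the paper.
        self.keys = nn.Parameter(torch.randn(num_slots, dim))
        self.values = nn.Parameter(torch.randn(num_slots, dim))

    def forward(self, query: torch.Tensor):
        # query: (batch, dim) photo-domain features from an encoder.
        q = F.normalize(query, dim=-1)
        k = F.normalize(self.keys, dim=-1)
        attn = F.softmax(q @ k.t() / 0.07, dim=-1)  # soft slot addressing
        read = attn @ self.values                   # retrieved sketch features
        return read, attn

def memory_refinement_loss(query, memory, attn):
    """A plausible stand-in for the MR Loss: pull the most strongly
    addressed key toward its query so the slots stay aligned with the
    photo-feature distribution, without any sketch-side labels."""
    idx = attn.argmax(dim=-1)                       # winning slot per query
    nearest = F.normalize(memory.keys[idx], dim=-1)
    q = F.normalize(query, dim=-1)
    return (1.0 - (q * nearest).sum(dim=-1)).mean()

# Usage: route encoder features through the memory, then decode the
# retrieved sketch-domain features with a generator.
mem = DomainMemory()
photo_feat = torch.randn(8, 256)
sketch_feat, attn = mem(photo_feat)
loss_mr = memory_refinement_loss(photo_feat, mem, attn)

Soft addressing keeps the lookup differentiable, so under this assumed design the memory slots can be trained end-to-end alongside the rest of the network.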

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2202.03596
Document Type :
Working Paper