Face shape transfer via semantic warping
- Authors
Zonglin Li, Xiaoqian Lv, Wei Yu, Qinglin Liu, Jingbo Lin, and Shengping Zhang
- Subjects
Computer vision, Face editing, Spatial transformer, High-resolution face dataset, Electronic computers. Computer science (QA75.5-76.95), Neurophysiology and neuropsychology (QP351-495)
- Abstract
Face reshaping aims to adjust the shape of a face in a portrait image to make it more aesthetically pleasing, which has many potential applications. Existing methods 1) operate on pre-defined facial landmarks, leading to artifacts and distortions because of the limited number of landmarks, 2) synthesize new faces from segmentation masks or sketches, producing unsatisfactory results due to the loss of skin details and difficulties in handling hair and blurred backgrounds, or 3) project the positions of deformed feature points from a 3D face model onto the 2D image, making the results unrealistic because of misalignment between the feature points. In this paper, we propose a novel method named face shape transfer (FST) via semantic warping, which can transfer both the overall face shape and individual components (e.g., eyes, nose, and mouth) of a reference image to the source image. To achieve controllability at the component level, we introduce five encoding networks designed to learn feature embeddings specific to different face components. To effectively exploit features obtained from semantic parsing maps at different scales, we directly connect all layers within the global dense network; these direct connections maximize information flow between layers and efficiently utilize semantic parsing information at diverse scales. To avoid deformation artifacts, we introduce a spatial transformer network that allows the model to handle different types of semantic warping effectively. To facilitate extensive evaluation, we construct a large-scale high-resolution face dataset containing 14,000 images at a resolution of 1024 × 1024. Qualitative and quantitative experiments on the benchmark dataset demonstrate the superior performance of our method.
- Published
2024
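A rough picture of the spatial-transformer-style semantic warping described in the abstract: a small CNN predicts a dense flow field from the source and reference semantic parsing maps, and the source portrait is warped by sampling along that flow. The sketch below is only an illustration under assumed details (PyTorch, 19-channel parsing maps, and the SemanticWarp/flow_net names and three-layer flow estimator are hypothetical), not the authors' implementation.

```python
# Minimal sketch of flow-based semantic warping with a spatial-transformer-style
# sampler. Not the paper's code: the architecture and names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticWarp(nn.Module):
    def __init__(self, parsing_channels=19):
        super().__init__()
        # Flow estimator: source + reference parsing maps in, 2-channel flow out.
        self.flow_net = nn.Sequential(
            nn.Conv2d(2 * parsing_channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 2, 3, padding=1),
        )

    def forward(self, src_img, src_parsing, ref_parsing):
        b, _, h, w = src_img.shape
        flow = self.flow_net(torch.cat([src_parsing, ref_parsing], dim=1))  # (B, 2, H, W)
        # Identity sampling grid in the [-1, 1] coordinates expected by grid_sample.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=src_img.device),
            torch.linspace(-1, 1, w, device=src_img.device),
            indexing="ij",
        )
        grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
        # Offset the grid by the predicted flow and resample the source image.
        return F.grid_sample(
            src_img, grid + flow.permute(0, 2, 3, 1), align_corners=True
        )

# Usage sketch with random tensors at 256x256 (the paper's dataset is 1024x1024).
if __name__ == "__main__":
    warp = SemanticWarp()
    img = torch.rand(1, 3, 256, 256)
    src_p = torch.rand(1, 19, 256, 256)
    ref_p = torch.rand(1, 19, 256, 256)
    print(warp(img, src_p, ref_p).shape)  # torch.Size([1, 3, 256, 256])
```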