
SFSegNet: Parse Freehand Sketches using Deep Fully Convolutional Networks

Authors :
Jiang, Junkun
Wang, Ruomei
Lin, Shujin
Wang, Fei
Publication Year :
2019

Abstract

Parsing sketches via semantic segmentation is attractive but challenging, because (i) free-hand drawings are abstract, with large variance in how objects are depicted owing to different drawing styles and skills; (ii) distorted lines drawn on a touchpad make sketches harder to recognize; and (iii) high-performance image segmentation via deep learning requires enormous annotated sketch datasets during training. In this paper, we propose a Sketch-target deep FCN Segmentation Network (SFSegNet) for automatic free-hand sketch segmentation, labeling each sketch of a single object with multiple part labels. SFSegNet is an end-to-end network mapping input sketches to segmentation results, composed of two parts: (i) a modified deep Fully Convolutional Network (FCN) that uses a reweighting strategy to ignore background pixels and classify the part to which each pixel belongs; (ii) affine transform encoders that attempt to canonicalize shaky strokes. We train our network on a dataset of 10,000 annotated sketches to obtain a broadly applicable model that segments strokes semantically against a single ground truth. Extensive experiments show that our method outperforms other state-of-the-art networks.

Comment: Accepted for the 2019 International Joint Conference on Neural Networks (IJCNN-19)
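The reweighting strategy described in the abstract can be read as a per-pixel cross-entropy loss in which background pixels receive zero weight, so only stroke pixels drive the gradient. The sketch below is a minimal NumPy illustration of that reading; the function name, the zero-weight mask, and all shapes are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def reweighted_pixel_loss(logits, labels, background_label=0):
    """Per-pixel cross-entropy with background pixels given zero weight.

    logits: (H, W, C) raw class scores for C part classes.
    labels: (H, W) integer part labels; `background_label` marks background.
    NOTE: a hypothetical sketch of the paper's reweighting idea, not its
    actual loss.
    """
    # Numerically stable softmax over the class axis.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted)
    probs /= probs.sum(axis=-1, keepdims=True)

    h, w = labels.shape
    # Negative log-likelihood of the true class at each pixel.
    nll = -np.log(
        probs[np.arange(h)[:, None], np.arange(w)[None, :], labels] + 1e-12
    )

    # Weight mask: 0 on background pixels, 1 on stroke pixels.
    weights = (labels != background_label).astype(np.float64)
    return (nll * weights).sum() / max(weights.sum(), 1.0)
```

Averaging over stroke pixels only keeps the (dominant) background from swamping the loss, which is the point of ignoring background pixels during classification.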

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.1908.05389
Document Type :
Working Paper
Full Text :
https://doi.org/10.1109/IJCNN.2019.8851974