Cross-domain retrieving sketch and shape using cycle CNNs.
- Source :
- Computers & Graphics. Jun 2020, Vol. 89, p50-58. 9p.
- Publication Year :
- 2020
Abstract
- Highlights:
  • In this paper, we present a deep learning approach for cross-domain retrieval of 3D shapes and 2D sketch images.
  • We propose new Cycle CNNs, which construct the cross-domain mapping between the feature space of 2D sketches and that of 3D shapes.
  • The core of our method is that the mapping relationship can be generated directly through the proposed Cycle CNNs, without explicitly constructing a common feature space.
  • Our proposed Cycle CNNs work well on both sketch-based shape retrieval and shape-based sketch retrieval.

  In this paper, we present a deep learning approach for cross-domain retrieval of 3D shapes and 2D sketch images. Cross-domain retrieval has received significant attention as a way to flexibly find information across different modalities of data. Effectively measuring the similarity between different modalities is the key to cross-domain retrieval. Modalities such as shape and sketch have imbalanced and complementary relationships, containing unequal amounts of information when describing the same semantics. Existing deep-learning methods mostly construct one common space for the different modalities, and these networks usually lose exclusive modality-specific characteristics. To address this problem, we propose novel Cycle CNNs to estimate the cross-domain mapping between the space of 3D shape descriptors and that of 2D sketch features. First, we employ existing networks to construct an independent feature space for each modality, so that modality-specific properties within each modality are fully exploited. Next, we use the designed Cycle CNNs to learn the mapping function between the different feature spaces, capturing the relationship between the 3D shape feature space and the 2D sketch feature domain. Finally, we use the learned mapping between the feature spaces of the different modalities to perform cross-domain retrieval. We demonstrate a variety of promising results, where our method achieves better retrieval accuracy than existing state-of-the-art approaches. [ABSTRACT FROM AUTHOR]
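  The record contains no implementation details, so the following is only a minimal, hypothetical sketch (in PyTorch) of the cycle-mapping idea the abstract describes: two small networks map between pre-extracted 2D sketch features and 3D shape descriptors, trained with a direct mapping loss plus a cycle-consistency term, after which sketch-based shape retrieval reduces to nearest-neighbor search in the shape feature space. All names, dimensions, losses, and weights here are illustrative assumptions, not the authors' configuration.

  ```python
  # Hypothetical sketch of the cycle-mapping idea from the abstract.
  # Per-modality feature extractors are assumed to already exist; random
  # tensors stand in for pre-extracted sketch/shape features.
  import torch
  import torch.nn as nn
  import torch.nn.functional as F

  FEAT_DIM = 256  # assumed feature dimensionality for both modalities

  def mlp(in_dim, out_dim):
      """A small MLP mapping one feature space to the other (illustrative)."""
      return nn.Sequential(
          nn.Linear(in_dim, 512), nn.ReLU(),
          nn.Linear(512, out_dim),
      )

  sketch_to_shape = mlp(FEAT_DIM, FEAT_DIM)  # F: sketch space -> shape space
  shape_to_sketch = mlp(FEAT_DIM, FEAT_DIM)  # G: shape space -> sketch space

  opt = torch.optim.Adam(
      list(sketch_to_shape.parameters()) + list(shape_to_sketch.parameters()),
      lr=1e-4,
  )

  # Stand-in paired training features (sketch i depicts shape i).
  sketch_feats = torch.randn(128, FEAT_DIM)
  shape_feats = torch.randn(128, FEAT_DIM)

  for step in range(100):
      mapped_shape = sketch_to_shape(sketch_feats)   # F(s)
      mapped_sketch = shape_to_sketch(shape_feats)   # G(x)

      # Direct mapping loss: mapped features should land near their pair.
      loss_map = F.mse_loss(mapped_shape, shape_feats) + \
                 F.mse_loss(mapped_sketch, sketch_feats)

      # Cycle consistency: mapping there and back should recover the input.
      loss_cycle = F.mse_loss(shape_to_sketch(mapped_shape), sketch_feats) + \
                   F.mse_loss(sketch_to_shape(mapped_sketch), shape_feats)

      loss = loss_map + 1.0 * loss_cycle  # weight is an arbitrary placeholder
      opt.zero_grad()
      loss.backward()
      opt.step()

  # Sketch-based shape retrieval: map a query sketch feature into the shape
  # space and rank gallery shapes by distance (nearest neighbors first).
  with torch.no_grad():
      query = sketch_to_shape(sketch_feats[:1])   # (1, FEAT_DIM)
      dists = torch.cdist(query, shape_feats)     # (1, 128)
      ranking = dists.argsort(dim=1)              # retrieval order
  ```

  Shape-based sketch retrieval would run the same nearest-neighbor search in the opposite direction through `shape_to_sketch`; the point of the design, per the abstract, is that no shared embedding space is ever built, so each modality keeps its own feature space intact.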
- Subjects :
- *DRAWING
*DEEP learning
*GEOMETRIC shapes
*NET losses
Details
- Language :
- English
- ISSN :
- 0097-8493
- Volume :
- 89
- Database :
- Academic Search Index
- Journal :
- Computers & Graphics
- Publication Type :
- Academic Journal
- Accession number :
- 143742211
- Full Text :
- https://doi.org/10.1016/j.cag.2020.05.018