101. Flexible Fashion Product Retrieval Using Multimodality-Based Deep Learning
- Author
Yeonsik Jo, Jehyeon Wi, Minseok Kim, and Jae Yeol Lee
- Subjects
deep learning, fashion product retrieval, multimodality-based retrieval, sustainable online shopping mall
- Abstract
Fashion product search in online shopping malls typically relies on the meta-information attached to each product. However, meta-information alone does not guarantee customer satisfaction because of inherent limitations such as inaccurate input meta-information, imbalanced categories, and misclassified apparel images. These limitations prevent the shopping mall from retrieving the products users actually want. This paper proposes a new fashion product search method using multimodality-based deep learning, which supports more flexible and efficient retrieval by combining faceted queries with fashion image-based features. A deep convolutional neural network (CNN) generates a unique feature vector for each product image, and the user's query is vectorized by a long short-term memory (LSTM)-based recurrent neural network (RNN). The semantic similarity between the query vector and the product image vectors is then computed to find the best matches. Three different forms of the faceted query are supported. Quantitative and qualitative analyses demonstrate the effectiveness and originality of the proposed approach.
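
The abstract describes a two-tower retrieval setup: a CNN image encoder, an LSTM query encoder, and a similarity-based ranking step. The PyTorch sketch below illustrates that general idea only; the ResNet-18 backbone, embedding size, tokenization, and all hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the retrieval idea in the abstract: a CNN encodes product
# images, an LSTM encodes the faceted text query into the same space, and
# cosine similarity ranks products. Backbone, sizes, and tokenization are
# assumptions for illustration, not the authors' exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class ImageEncoder(nn.Module):
    """CNN that maps a product image to a fixed-length embedding."""
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        backbone = resnet18(weights=None)  # assumed backbone choice
        backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)
        self.backbone = backbone

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.backbone(images), dim=-1)

class QueryEncoder(nn.Module):
    """LSTM that maps a tokenized faceted query to the same embedding space."""
    def __init__(self, vocab_size: int, embed_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 128)
        self.lstm = nn.LSTM(128, embed_dim, batch_first=True)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        _, (h_n, _) = self.lstm(self.embed(token_ids))
        return F.normalize(h_n[-1], dim=-1)  # last hidden state as query vector

# Retrieval: rank catalog images by cosine similarity to the query vector.
img_enc, qry_enc = ImageEncoder(), QueryEncoder(vocab_size=10_000)
catalog = img_enc(torch.randn(100, 3, 224, 224))    # 100 dummy product images
query = qry_enc(torch.randint(0, 10_000, (1, 8)))   # one 8-token faceted query
scores = catalog @ query.T                          # cosine similarity (unit vectors)
best = scores.squeeze(1).topk(5).indices            # indices of the top-5 matches
print(best)
```

In practice the two encoders would be trained jointly (e.g., with a contrastive or ranking loss) so that matching query-image pairs land close together in the shared embedding space; the sketch above shows only the inference-time ranking.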
- Published
2020