A framework for real-time dress classification in cluttered background images for robust image retrieval.
- Source: Cognition, Technology & Work. Nov 2023, Vol. 25, Issue 4, p373-384. 12p.
- Publication Year: 2023
Abstract
The dress of a person can provide information about culture, status, gender, and personality. Dress classification can help improve e-commerce, image-based search engines for the fashion industry, video surveillance, and social media image categorization. Deep learning is emerging as one of the most powerful classification techniques in many fields, such as medicine, business, and science. In the fashion industry, Convolutional Neural Networks have played a vital role in dress identification and classification, but it remains a difficult task due to cluttered backgrounds, varied poses, and a lack of fashion datasets with rich classes and annotations. To compensate for small-sized datasets, transfer learning is widely used in training deep learning models. The current research applies transfer learning in two steps to the InceptionV3 model pre-trained on the ImageNet dataset. First, the pre-trained model is fine-tuned on the DeepFashion dataset to shift the domain of the learned parameters toward fashion. In the second step, the model is fine-tuned on the Pak Dataset, a collection of Asian cultural fashion images with cluttered backgrounds. Experiments show the robustness and usefulness of two-step transfer learning for classifying fashion images with cluttered backgrounds. Dress classification can be used in fashion image retrieval and recommendation systems, and in video surveillance systems for finding missing persons or crime suspects. [ABSTRACT FROM AUTHOR]
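
The two-step procedure described in the abstract can be illustrated with a short sketch. Below is a minimal, hypothetical Keras/TensorFlow example of fine-tuning an ImageNet-pretrained InceptionV3 first on a large fashion dataset and then on a smaller target dataset; the directory paths, class counts, and hyperparameters are illustrative assumptions, not values taken from the paper.

```python
# A sketch of two-step transfer learning with InceptionV3 (Keras/TensorFlow).
# Dataset paths, class counts, and hyperparameters are illustrative
# assumptions, not values reported in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

def build_model(num_classes):
    # ImageNet-pretrained backbone without its original classification head.
    base = InceptionV3(weights="imagenet", include_top=False,
                       input_shape=(299, 299, 3))
    x = layers.GlobalAveragePooling2D()(base.output)
    out = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(base.input, out)

def load(path):
    # Expects one subdirectory per class; integer labels are inferred.
    ds = tf.keras.utils.image_dataset_from_directory(
        path, image_size=(299, 299), batch_size=32)
    # InceptionV3 expects pixel values scaled to [-1, 1].
    preprocess = tf.keras.applications.inception_v3.preprocess_input
    return ds.map(lambda x, y: (preprocess(x), y))

def fine_tune(model, data, epochs=5, learning_rate=1e-4):
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(data, epochs=epochs)
    return model

# Step 1: shift the ImageNet features toward the fashion domain.
model = build_model(num_classes=50)            # e.g. DeepFashion categories
model = fine_tune(model, load("deepfashion/train"))

# Step 2: swap the classification head for the target classes and
# fine-tune again on the smaller, cluttered-background cultural dataset.
pooled = model.layers[-2].output               # global-average-pooling layer
head = layers.Dense(12, activation="softmax")(pooled)  # assumed class count
model = fine_tune(models.Model(model.input, head), load("pak_dataset/train"))
```

Freezing most of the backbone for the first epochs and then unfreezing it for a low-learning-rate pass is a common refinement of this recipe; the authors' exact training schedule should be taken from the full text.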
Details
- Language: English
- ISSN: 1435-5558
- Volume: 25
- Issue: 4
- Database: Academic Search Index
- Journal: Cognition, Technology & Work
- Publication Type: Academic Journal
- Accession Number: 173154249
- Full Text: https://doi.org/10.1007/s10111-023-00735-5