Multi-View Saliency Guided Deep Neural Network for 3-D Object Retrieval and Classification.
- Source :
- IEEE Transactions on Multimedia; Jun2020, Vol. 22 Issue 6, p1496-1506, 11p
- Publication Year :
- 2020
Abstract
- In this paper, we propose the multi-view saliency guided deep neural network (MVSG-DNN) for 3D object retrieval and classification. The method consists of three key modules. First, the model projection rendering module captures multiple views of a 3D object. Second, the visual context learning module applies a basic Convolutional Neural Network to extract visual features from individual views and then employs a saliency LSTM to adaptively select representative views based on the multi-view context. Finally, with this information, the multi-view representation learning module generates compact 3D object descriptors with the designed classification LSTM for 3D object retrieval and classification. The proposed MVSG-DNN has two main contributions: 1) it jointly realizes the selection of representative views and the similarity measure by fully exploiting multi-view context; 2) it discovers the discriminative structure of a multi-view sequence without the constraints of specific camera settings. Consequently, it supports flexible 3D object retrieval and classification in real applications without requiring fixed camera settings. Extensive comparison experiments on ModelNet10, ModelNet40, and ShapeNetCore55 demonstrate the superiority of MVSG-DNN against state-of-the-art methods. [ABSTRACT FROM AUTHOR]
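- The pipeline described in the abstract (per-view CNN features, a saliency LSTM that scores and selects representative views, and a classification LSTM that aggregates them into a single descriptor) can be sketched roughly as follows. This is a minimal PyTorch illustration, assuming a ResNet-18 backbone, hard top-k view selection, and illustrative layer sizes; it is not the authors' exact architecture.

```python
# Minimal sketch of an MVSG-DNN-style pipeline. Module names, layer sizes,
# and the top-k selection rule are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models


class MVSGDNNSketch(nn.Module):
    def __init__(self, num_classes, feat_dim=512, hidden_dim=512, num_selected=4):
        super().__init__()
        # Visual context learning: a basic CNN backbone for per-view features.
        backbone = models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop the fc layer
        # Saliency LSTM: scores each view from the multi-view context.
        self.saliency_lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.saliency_fc = nn.Linear(hidden_dim, 1)
        # Classification LSTM: aggregates the selected views into one descriptor.
        self.cls_lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)
        self.num_selected = num_selected

    def forward(self, views):                       # views: (B, V, 3, H, W) rendered projections
        b, v = views.shape[:2]
        feats = self.cnn(views.flatten(0, 1))       # (B*V, 512, 1, 1)
        feats = feats.flatten(1).view(b, v, -1)     # (B, V, 512) per-view features
        # Score views in context and keep the top-k representative ones.
        sal, _ = self.saliency_lstm(feats)          # (B, V, hidden)
        scores = self.saliency_fc(sal).squeeze(-1)  # (B, V) saliency scores
        top = scores.topk(self.num_selected, dim=1).indices
        idx = top.unsqueeze(-1).expand(-1, -1, feats.size(-1))
        selected = feats.gather(1, idx)             # (B, K, 512) selected views
        # Multi-view representation learning: final hidden state as descriptor.
        _, (h, _) = self.cls_lstm(selected)
        descriptor = h[-1]                          # (B, hidden) retrieval embedding
        logits = self.classifier(descriptor)        # (B, num_classes) classification
        return descriptor, logits
```

- In such a setup, the descriptor would be compared by a distance measure (e.g., cosine or Euclidean) for retrieval, while the logits drive classification; the number of rendered views per object is a free parameter rather than a fixed camera configuration.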
Details
- Language :
- English
- ISSN :
- 1520-9210
- Volume :
- 22
- Issue :
- 6
- Database :
- Complementary Index
- Journal :
- IEEE Transactions on Multimedia
- Publication Type :
- Academic Journal
- Accession number :
- 143456875
- Full Text :
- https://doi.org/10.1109/TMM.2019.2943740