1. No Time to Train: Empowering Non-Parametric Networks for Few-shot 3D Scene Segmentation
- Author
- Zhu, Xiangyang, Zhang, Renrui, He, Bowei, Guo, Ziyu, Liu, Jiaming, Xiao, Han, Fu, Chaoyou, Dong, Hao, and Gao, Peng
- Abstract
- To reduce the reliance on large-scale datasets, recent works in 3D segmentation resort to few-shot learning. Current 3D few-shot segmentation methods first pre-train models on 'seen' classes and then evaluate their generalization performance on 'unseen' classes. However, the prior pre-training stage not only introduces excessive time overhead but also incurs a significant domain gap on 'unseen' classes. To tackle these issues, we propose a Non-parametric Network for few-shot 3D Segmentation, Seg-NN, and its Parametric variant, Seg-PN. Without any training, Seg-NN extracts dense representations with hand-crafted filters and achieves performance comparable to existing parametric models. By eliminating pre-training, Seg-NN alleviates the domain gap issue and saves a substantial amount of time. Based on Seg-NN, Seg-PN requires training only a lightweight QUEry-Support Transferring (QUEST) module, which enhances the interaction between the support set and the query set. Experiments show that Seg-PN outperforms the previous state-of-the-art method by +4.19% and +7.71% mIoU on the S3DIS and ScanNet datasets, respectively, while reducing training time by 90%, indicating its effectiveness and efficiency.
- Comment
- CVPR Highlight. Code is available at https://github.com/yangyangyang127/Seg-NN. arXiv admin note: text overlap with arXiv:2308.12961.
- Published
- 2024
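
As a rough illustration of the training-free idea described in the abstract (hand-crafted point embeddings matched between support and query sets), the sketch below encodes raw coordinates with sinusoidal filters and transfers support labels to query points by similarity voting. This is a minimal sketch under assumed design choices, not the paper's actual Seg-NN filters or QUEST module; the function names, the frequency schedule, and the k-NN voting rule are all illustrative assumptions.

```python
import numpy as np

def trigonometric_embed(xyz, num_freqs=6):
    """Hand-crafted (training-free) point embedding: sinusoidal functions of the
    raw coordinates at several frequencies. Illustrative only; not Seg-NN's
    exact filter design."""
    freqs = 2.0 ** np.arange(num_freqs)             # (F,)
    scaled = xyz[:, :, None] * freqs[None, None]    # (N, 3, F)
    emb = np.concatenate([np.sin(scaled), np.cos(scaled)], axis=-1)
    return emb.reshape(xyz.shape[0], -1)            # (N, 3 * 2F)

def few_shot_segment(query_xyz, support_xyz, support_labels, k=5):
    """Label each query point by cosine-similarity voting against the embedded
    support points; no parameters are learned."""
    q = trigonometric_embed(query_xyz)
    s = trigonometric_embed(support_xyz)
    q = q / (np.linalg.norm(q, axis=1, keepdims=True) + 1e-8)
    s = s / (np.linalg.norm(s, axis=1, keepdims=True) + 1e-8)
    sim = q @ s.T                                   # (Nq, Ns) cosine similarities
    knn = np.argsort(-sim, axis=1)[:, :k]           # indices of top-k support matches
    votes = support_labels[knn]                     # (Nq, k) candidate labels
    # majority vote over the k most similar support points
    return np.array([np.bincount(v).argmax() for v in votes])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    support_xyz = rng.uniform(size=(200, 3))
    support_labels = (support_xyz[:, 0] > 0.5).astype(int)  # toy 2-class scene
    query_xyz = rng.uniform(size=(100, 3))
    print(few_shot_segment(query_xyz, support_xyz, support_labels)[:10])
```

In this toy setup the support/query interaction is a plain similarity vote; the paper's Seg-PN instead learns a lightweight QUEST module to refine that transfer, which is where its reported gains over the non-parametric baseline come from.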