A dual feature fusion network for point cloud completion
- Source :
- IET Computer Vision, Vol 16, Iss 6, Pp 541-555 (2022)
- Publication Year :
- 2022
- Publisher :
- Wiley, 2022.
Abstract
- Point cloud data in the real world are often affected by occlusion and light reflection, leading to incomplete data. Point clouds with large missing regions cause great deviations in downstream tasks. A dual feature fusion network (DFF‐Net) is proposed to improve completion accuracy for large missing regions of a point cloud. First, a dual feature encoder is designed to extract and fuse the global and local features of the input point cloud. A decoder then directly generates a point cloud of the missing region that retains local details. To make the generated point cloud more detailed, a loss function with multiple terms is employed to emphasise the distribution density and visual quality of the generated point cloud. Extensive experiments show that the authors' DFF‐Net outperforms previous state‐of‐the‐art methods on point cloud completion.
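The global–local feature fusion and the completion loss described in the abstract can be illustrated with a minimal NumPy sketch. The specifics here are assumptions for illustration, not the authors' exact DFF‐Net: a single shared linear layer stands in for the encoder's per‐point MLPs, max‐pooling produces the global feature, and a symmetric Chamfer distance stands in for one term of the multi‐term loss.

```python
import numpy as np

def point_mlp(points, weight, bias):
    # Shared per-point MLP, reduced to one linear layer + ReLU for brevity.
    # points: (N, 3); weight: (3, C); bias: (C,) -> (N, C) local features.
    return np.maximum(points @ weight + bias, 0.0)

def fuse_features(points, weight, bias):
    # "Local" branch: per-point features from the shared MLP.
    local = point_mlp(points, weight, bias)            # (N, C)
    # "Global" branch: order-invariant feature via max-pooling over points.
    glob = local.max(axis=0)                           # (C,)
    # Fusion: broadcast the global feature to every point and concatenate.
    tiled = np.tile(glob, (local.shape[0], 1))         # (N, C)
    return np.concatenate([local, tiled], axis=1)      # (N, 2C)

def chamfer_distance(pred, gt):
    # Symmetric Chamfer distance, a standard term in completion losses.
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)  # (Np, Ng)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

A fused feature of shape (N, 2C) gives the decoder both fine per‐point detail and cloud‐level context; the Chamfer term penalises predicted points far from the ground truth and vice versa.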
Details
- Language :
- English
- ISSN :
- 1751-9640 and 1751-9632
- Volume :
- 16
- Issue :
- 6
- Database :
- Directory of Open Access Journals
- Journal :
- IET Computer Vision
- Publication Type :
- Academic Journal
- Accession number :
- edsdoj.823876549d06421fa3c098e576a4ee3e
- Document Type :
- article
- Full Text :
- https://doi.org/10.1049/cvi2.12111