1. Improved Video-Based Point Cloud Compression via Segmentation
- Author
Faranak Tohidi, Manoranjan Paul, Anwaar Ulhaq, and Subrata Chakraborty
- Subjects
dynamic point cloud, compression, segmentation, V-PCC, 3D video, Chemical technology, TP1-1185
- Abstract
A point cloud is a representation of objects or scenes utilising unordered points comprising 3D positions and attributes. The ability of point clouds to mimic natural forms has gained significant attention from diverse applied fields, such as virtual reality and augmented reality. However, point clouds, especially those representing dynamic scenes or objects in motion, must be compressed efficiently due to their huge data volume. The latest video-based point cloud compression (V-PCC) standard for dynamic point clouds divides the 3D point cloud into many patches using computationally expensive normal estimation, segmentation, and refinement. The patches are projected onto a 2D plane so that existing video coding techniques can be applied. This process often results in the loss of proximity information and of some original points, which induces artefacts that adversely affect user perception. The proposed method segments dynamic point clouds based on shape similarity and occlusion before patch generation. This segmentation strategy helps maintain the points' proximity and retains more of the original points by exploiting the density and occlusion of the points. The experimental results establish that the proposed method significantly outperforms the V-PCC standard and other relevant methods in rate–distortion performance and subjective quality testing for both the geometry and texture data of several benchmark video sequences.
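To make the idea of segmenting a point cloud by spatial proximity before patch generation concrete, below is a minimal Python sketch using Open3D. It is an illustration only, not the authors' implementation: the file name and clustering parameters are placeholders, and it uses plain density-based clustering rather than the paper's shape-similarity and occlusion criteria or V-PCC's actual patch generation.

```python
# Minimal illustration (not the authors' method): group points of one frame
# by spatial proximity so that nearby points end up in the same segment
# before any patch generation. File name and parameters are hypothetical.
import numpy as np
import open3d as o3d

# Load one frame of a dynamic point cloud (path is a placeholder).
pcd = o3d.io.read_point_cloud("frame_0000.ply")

# V-PCC-style pipelines use surface normals to choose projection directions;
# estimating them here mirrors that preprocessing step.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30)
)

# Density-based clustering keeps spatially close points together, which is
# the kind of proximity preservation the abstract argues for.
labels = np.array(pcd.cluster_dbscan(eps=0.02, min_points=10))

# Split the cloud into per-segment point sets for downstream patch creation.
points = np.asarray(pcd.points)
segments = {}
for label in np.unique(labels):
    if label == -1:  # -1 marks noise points in DBSCAN
        continue
    segments[int(label)] = points[labels == label]

print(f"{len(segments)} segments from {len(points)} points")
```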
- Published
- 2024