Estimating a 3D Human Skeleton from a Single RGB Image by Fusing Predicted Depths from Multiple Virtual Viewpoints.
- Source :
- Sensors (Basel, Switzerland) [Sensors (Basel)] 2024 Dec 15; Vol. 24 (24). Date of Electronic Publication: 2024 Dec 15.
- Publication Year :
- 2024
Abstract
- In computer vision, accurately estimating a 3D human skeleton from a single RGB image remains a challenging task. Inspired by the advantages of multi-view approaches, we propose a method that predicts enhanced 2D skeletons (specifically, the joints' relative depths) from multiple virtual viewpoints based on a single real-view image. By fusing these virtual-viewpoint skeletons, we can estimate the final 3D human skeleton more accurately. Our network consists of two stages. The first stage is a two-stream network: the Real-Net stream predicts 2D image coordinates and a relative depth for each joint from the real viewpoint, while the Virtual-Net stream estimates the relative depths of the same joints in the virtual viewpoints. The second stage consists of a depth-denoising module, a cropped-to-original coordinate transform (COCT) module, and a fusion module. The fusion module combines skeleton information from the real and virtual viewpoints so that it can undergo feature embedding, 2D-to-3D lifting, and regression to an accurate 3D skeleton. The experimental results demonstrate that our single-view method achieves an average per-joint position error of 45.7 mm, outperforming several prior studies of the same kind and comparable to sequence-based methods that take tens of consecutive frames as input.
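The fuse-then-lift idea in the abstract can be sketched in a greatly simplified form. This is an illustrative assumption, not the paper's method: the function names, the pinhole camera parameters, and the plain averaging that stands in for the learned depth-denoising and fusion modules are all hypothetical.

```python
import numpy as np

def fuse_virtual_depths(depth_real, depths_virtual):
    """Fuse relative-depth predictions from the real view and several
    virtual viewpoints into one depth estimate per joint.

    depth_real     : (J,)   relative depths predicted in the real view
    depths_virtual : (V, J) relative depths predicted in V virtual views

    Plain averaging is a hypothetical stand-in for the paper's learned
    denoising and fusion modules.
    """
    all_depths = np.vstack([depth_real[None, :], depths_virtual])  # (V+1, J)
    return all_depths.mean(axis=0)                                 # (J,)

def lift_to_3d(joints_2d, fused_depth, focal=1000.0, center=(500.0, 500.0)):
    """Back-project 2D joints with fused depths to camera-space 3D
    under an assumed pinhole model (focal and center are illustrative)."""
    cx, cy = center
    x = (joints_2d[:, 0] - cx) * fused_depth / focal
    y = (joints_2d[:, 1] - cy) * fused_depth / focal
    return np.stack([x, y, fused_depth], axis=1)  # (J, 3)

# Toy example: 17 joints, 4 virtual viewpoints with slightly noisy depths
rng = np.random.default_rng(0)
joints_2d = rng.uniform(0, 1000, size=(17, 2))
depth_real = rng.uniform(2.0, 5.0, size=17)
depths_virtual = depth_real + rng.normal(0.0, 0.05, size=(4, 17))

fused = fuse_virtual_depths(depth_real, depths_virtual)
skeleton_3d = lift_to_3d(joints_2d, fused)
print(skeleton_3d.shape)  # (17, 3)
```

Averaging several noisy per-viewpoint depth estimates reduces their variance, which is the intuition behind fusing virtual viewpoints; the paper replaces this naive step with learned denoising, embedding, and regression.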
Details
- Language :
- English
- ISSN :
- 1424-8220
- Volume :
- 24
- Issue :
- 24
- Database :
- MEDLINE
- Journal :
- Sensors (Basel, Switzerland)
- Publication Type :
- Academic Journal
- Accession number :
- 39771753
- Full Text :
- https://doi.org/10.3390/s24248017