
RELI11D: A Comprehensive Multimodal Human Motion Dataset and Method

Authors:
Yan, Ming
Zhang, Yan
Cai, Shuqiang
Fan, Shuqi
Lin, Xincheng
Dai, Yudi
Shen, Siqi
Wen, Chenglu
Xu, Lan
Ma, Yuexin
Wang, Cheng
Publication Year:
2024

Abstract

Comprehensive capture of human motion requires both accurate estimation of complex poses and precise localization of the human within scenes. Most human pose estimation (HPE) datasets and methods rely primarily on RGB, LiDAR, or IMU data. However, using these modalities alone, or in combination, may not be adequate for HPE, particularly for complex and fast movements. For holistic human motion understanding, we present RELI11D, a high-quality multimodal human motion dataset involving a LiDAR, an IMU system, an RGB camera, and an Event camera. It records the motions of 10 actors performing 5 sports in 7 scenes, comprising 3.32 hours of synchronized LiDAR point clouds, IMU measurements, RGB videos, and Event streams. Through extensive experiments, we demonstrate that RELI11D presents considerable challenges and opportunities, as it contains many rapid and complex motions that require precise localization. To address the challenge of integrating different modalities, we propose LEIR, a multimodal baseline that effectively utilizes LiDAR point clouds, Event streams, and RGB images through our cross-attention fusion strategy. We show that LEIR achieves promising results on both rapid motions and daily motions, and that exploiting the characteristics of multiple modalities can indeed improve HPE performance. Both the dataset and the source code will be released publicly to the research community, fostering collaboration and enabling further exploration in this field.

Comment: CVPR 2024. Project website: http://www.lidarhumanmotion.net/reli11d/
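The abstract names a cross-attention fusion strategy for combining LiDAR, Event, and RGB features but does not describe its architecture. The sketch below is a minimal, hypothetical illustration of that general idea in PyTorch: RGB tokens act as queries that attend separately to LiDAR and Event tokens, with residual fusion. All module names, tensor shapes, the choice of RGB as the query stream, and every hyperparameter are assumptions for illustration, not the actual LEIR design.

```python
# Hypothetical sketch of cross-attention fusion over three modalities.
# This is NOT the paper's LEIR architecture; shapes, the query/key choice,
# and all hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    """Fuse RGB, LiDAR, and Event feature tokens via cross-attention."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # RGB tokens query the LiDAR tokens and the Event tokens separately.
        self.rgb_to_lidar = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.rgb_to_event = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, rgb, lidar, event):
        # rgb:   (B, N_rgb, dim)  image patch tokens
        # lidar: (B, N_pts, dim)  point-cloud tokens
        # event: (B, N_evt, dim)  event-stream tokens
        lidar_ctx, _ = self.rgb_to_lidar(query=rgb, key=lidar, value=lidar)
        event_ctx, _ = self.rgb_to_event(query=rgb, key=event, value=event)
        fused = self.norm(rgb + lidar_ctx + event_ctx)  # residual fusion
        return fused + self.mlp(fused)


# Toy usage with random features standing in for per-modality encoders.
if __name__ == "__main__":
    B, dim = 2, 256
    rgb = torch.randn(B, 196, dim)    # e.g. 14x14 image patches
    lidar = torch.randn(B, 512, dim)  # sampled point features
    event = torch.randn(B, 128, dim)  # voxelized event features
    out = CrossAttentionFusion(dim)(rgb, lidar, event)
    print(out.shape)  # torch.Size([2, 196, 256])
```

One plausible motivation for this pattern, under the assumptions above, is that a dense image grid gives a natural query layout while sparse point-cloud and event tokens contribute geometry and high-speed motion cues; the paper itself should be consulted for how LEIR actually fuses the modalities.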

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2403.19501
Document Type:
Working Paper