
ObitoNet: Multimodal High-Resolution Point Cloud Reconstruction

Authors:
Thapliyal, Apoorv
Lanka, Vinay
Baskaran, Swathi
Publication Year:
2024

Abstract

ObitoNet employs a cross-attention mechanism to integrate multimodal inputs: a Vision Transformer (ViT) extracts semantic features from images, while a point cloud tokenizer captures spatial structure by processing geometric information with Farthest Point Sampling (FPS) and K-Nearest Neighbors (KNN). The fused multimodal features are fed into a transformer-based decoder for high-resolution point cloud reconstruction. This approach leverages the complementary strengths of both modalities, rich image features and precise geometric detail, ensuring robust point cloud generation even in challenging conditions such as sparse or noisy data.
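The abstract outlines three stages: FPS/KNN point tokenization, ViT image features, and cross-attention fusion before a transformer decoder. The sketch below illustrates the first and third stages in PyTorch; it is a minimal interpretation of the description, not the authors' implementation, and all names, layer sizes, and hyperparameters (e.g. PointTokenizer, CrossAttentionFusion, 64 centers, 16 neighbors, 256-dim tokens) are assumptions for illustration.

```python
# Illustrative sketch (assumed design, not the ObitoNet code):
# FPS selects well-spread center points, KNN groups local neighborhoods into
# point tokens, and a cross-attention block lets point tokens (queries) attend
# to ViT image tokens (keys/values) before decoding.
import torch
import torch.nn as nn

def farthest_point_sampling(xyz, n_centers):
    """Iteratively pick n_centers points that are maximally spread out."""
    B, N, _ = xyz.shape
    idx = torch.zeros(B, n_centers, dtype=torch.long, device=xyz.device)
    dist = torch.full((B, N), float("inf"), device=xyz.device)
    farthest = torch.randint(N, (B,), device=xyz.device)
    batch = torch.arange(B, device=xyz.device)
    for i in range(n_centers):
        idx[:, i] = farthest
        centroid = xyz[batch, farthest].unsqueeze(1)            # (B, 1, 3)
        dist = torch.minimum(dist, ((xyz - centroid) ** 2).sum(-1))
        farthest = dist.argmax(-1)
    return idx

class PointTokenizer(nn.Module):
    """Group KNN neighborhoods around FPS centers and embed them as tokens."""
    def __init__(self, n_centers=64, k=16, dim=256):
        super().__init__()
        self.n_centers, self.k = n_centers, k
        self.embed = nn.Sequential(nn.Linear(3 * k, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, xyz):                                     # xyz: (B, N, 3)
        center_idx = farthest_point_sampling(xyz, self.n_centers)
        centers = torch.gather(xyz, 1, center_idx.unsqueeze(-1).expand(-1, -1, 3))
        knn_idx = torch.cdist(centers, xyz).topk(self.k, largest=False).indices
        groups = torch.gather(
            xyz.unsqueeze(1).expand(-1, self.n_centers, -1, -1), 2,
            knn_idx.unsqueeze(-1).expand(-1, -1, -1, 3))
        groups = groups - centers.unsqueeze(2)                  # local coordinates
        return self.embed(groups.flatten(2)), centers           # tokens: (B, C, dim)

class CrossAttentionFusion(nn.Module):
    """Point tokens attend to ViT image tokens to pick up semantic cues."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, point_tokens, image_tokens):
        fused, _ = self.attn(point_tokens, image_tokens, image_tokens)
        return self.norm(point_tokens + fused)

if __name__ == "__main__":
    xyz = torch.randn(2, 1024, 3)           # sparse/noisy input cloud
    img_tokens = torch.randn(2, 197, 256)   # stand-in for ViT patch features
    tokens, centers = PointTokenizer()(xyz)
    fused = CrossAttentionFusion()(tokens, img_tokens)
    print(fused.shape)                      # torch.Size([2, 64, 256])
```

In such a design, the fused tokens would then be passed to a transformer decoder that upsamples each token's neighborhood into dense points for high-resolution reconstruction.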

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2412.18775
Document Type:
Working Paper