1. MS2Mesh-XR: Multi-modal Sketch-to-Mesh Generation in XR Environments
- Author
Yuqi Tong, Yue Qiu, Ruiyang Li, Shi Qiu, and Pheng-Ann Heng
- Subjects
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Human-Computer Interaction, Computer Science - Multimedia
- Abstract
We present MS2Mesh-XR, a novel multi-modal sketch-to-mesh generation pipeline that enables users to create realistic 3D objects in extended reality (XR) environments using hand-drawn sketches assisted by voice inputs. Specifically, users can intuitively sketch objects using natural hand movements in mid-air within a virtual environment. By integrating voice inputs, we employ ControlNet to infer realistic images based on the drawn sketches and interpreted text prompts. Users can then review and select their preferred image, which is subsequently reconstructed into a detailed 3D mesh using the Convolutional Reconstruction Model. In particular, our proposed pipeline can generate a high-quality 3D mesh in less than 20 seconds, allowing for immersive visualization and manipulation in run-time XR scenes. We demonstrate the practicality of our pipeline through two use cases in XR settings. By leveraging natural user inputs and cutting-edge generative AI capabilities, our approach can significantly facilitate XR-based creative production and enhance user experiences. Our code and demo will be available at: https://yueqiu0911.github.io/MS2Mesh-XR/
- Comment
IEEE AIxVR 2025
- Published
2024
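
The paper's code is not yet released, so the following is only a rough Python sketch of the pipeline described in the abstract above: a rasterized mid-air sketch plus a voice-derived text prompt conditions a scribble ControlNet (via Hugging Face diffusers), and the selected image is then handed to a mesh-reconstruction step. The model identifiers, the `reconstruct_mesh` placeholder, and the speech-to-text step are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of an MS2Mesh-XR-style flow: sketch + text prompt ->
# ControlNet-conditioned image generation -> single-image 3D reconstruction.
# Model IDs and reconstruct_mesh() are assumptions, not the paper's code.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline


def sketch_to_image(sketch: Image.Image, prompt: str) -> Image.Image:
    """Generate a realistic image conditioned on the hand-drawn sketch and prompt."""
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")
    return pipe(prompt, image=sketch, num_inference_steps=20).images[0]


def reconstruct_mesh(image: Image.Image, out_path: str) -> str:
    """Placeholder for the image-to-mesh step (the paper uses the
    Convolutional Reconstruction Model); plug in a single-image
    reconstruction model here."""
    raise NotImplementedError("Swap in an image-to-3D model such as CRM.")


if __name__ == "__main__":
    sketch = Image.open("midair_sketch.png").convert("RGB")  # rasterized XR sketch
    prompt = "a wooden chair, realistic, studio lighting"    # from speech-to-text
    candidate = sketch_to_image(sketch, prompt)              # user reviews/selects
    candidate.save("candidate.png")
    reconstruct_mesh(candidate, "object.glb")                # mesh for the XR scene
```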