Neural Radiance Fields Convert 2D to 3D Texture
- Authors
Yang Wang, Chenghao Wang, Zichao Li, Zhuoyue Wang, Xinqi Liu, and Yue Zhu
- Abstract
The objective of our project is to capture pictures or videos by moving in a circle around objects such as chairs, tables, and cars [1]. Utilizing advanced 3D reconstruction technology, we generate 3D models of these captured objects. After reconstruction, the 3D models can be edited through an intuitive interface, enabling users to apply different textures and make other modifications. This project has significant applications in domains such as home decoration, vehicle customization, and beyond. For the 3D reconstruction in this project, we employ Nvidia's Instant-NGP method, which leverages multiresolution hash encoding and offers substantially faster training and inference than the original NeRF (Neural Radiance Fields). Following the 3D reconstruction, we apply volume rendering to visualize the 3D models. To support user editing [2], we integrate an editable interface inspired by StyleGAN, using a texture loss function to transform the 3D model's appearance into a customizable texture. This combination of technologies enables a seamless and efficient pipeline for creating and editing 3D models from 2D images.
- Published
- 2024
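As background for the volume-rendering step mentioned in the abstract, the following is a minimal NumPy sketch of the standard NeRF compositing quadrature (alpha-compositing density and color samples along a ray). The function name and toy inputs are illustrative assumptions, not taken from the paper's code:

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Alpha-composite per-sample densities and colors along one ray,
    following the standard NeRF volume-rendering quadrature.

    sigmas: (N,) volume densities at the N samples along the ray
    colors: (N, 3) RGB color at each sample
    deltas: (N,) distances between adjacent samples
    Returns the rendered (3,) RGB value for the ray.
    """
    # Opacity contributed by each sample: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance T_i: fraction of light surviving all samples before i
    trans = np.cumprod(1.0 - alphas + 1e-10)
    trans = np.concatenate([[1.0], trans[:-1]])
    # Per-sample compositing weights, then weighted sum of colors
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)
```

For example, a ray whose first sample is fully opaque returns that sample's color, since all later samples receive (nearly) zero transmittance.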