
Semantic Flow: Learning Semantic Field of Dynamic Scenes from Monocular Videos

Authors:
Tian, Fengrui
Duan, Yueqi
Wang, Angtian
Guo, Jianfei
Du, Shaoyi
Publication Year:
2024

Abstract

In this work, we pioneer Semantic Flow, a neural semantic representation of dynamic scenes learned from monocular videos. In contrast to previous NeRF methods that reconstruct dynamic scenes from the colors and volume densities of individual points, Semantic Flow learns semantics from continuous flows that carry rich 3D motion information. Since extracting 3D flow features from 2D video frames suffers from a 2D-to-3D ambiguity along the viewing direction, we treat the volume densities as opacity priors that describe the contributions of flow features to the semantics on the frames. More specifically, we first learn a flow network to predict flows in the dynamic scene, and propose a flow feature aggregation module to extract flow features from video frames. We then propose a flow attention module to extract motion information from the flow features, followed by a semantic network that outputs semantic logits of flows. We integrate these logits with the volume densities along the viewing direction to supervise the flow features with semantic labels on video frames. Experimental results show that our model can learn from multiple dynamic scenes and supports a series of new tasks such as instance-level scene editing, semantic completion, dynamic scene tracking, and semantic adaptation on novel scenes. Code is available at https://github.com/tianfr/Semantic-Flow/.

Comment: Accepted by ICLR 2024. Code is available at https://github.com/tianfr/Semantic-Flow/.
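The integration step described in the abstract follows the standard NeRF-style volume-rendering quadrature, applied to per-sample semantic logits rather than colors, so that the composited 2D logits can be supervised with frame-level semantic labels. The following is a minimal PyTorch sketch of that step under assumed tensor shapes; the function names and the cross-entropy supervision shown here are illustrative placeholders, not the authors' implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

def render_semantics(logits, sigma, deltas):
    """Composite per-sample semantic logits along each ray.

    logits: (n_rays, n_samples, n_classes) semantic logits at sample points
    sigma:  (n_rays, n_samples) volume densities (opacity priors)
    deltas: (n_rays, n_samples) distances between adjacent samples
    """
    # Per-sample opacity from density and step size.
    alpha = 1.0 - torch.exp(-sigma * deltas)
    # Accumulated transmittance up to (but excluding) each sample.
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1,
    )[:, :-1]
    # Standard volume-rendering weights, used here to composite logits.
    weights = alpha * trans
    return (weights[..., None] * logits).sum(dim=1)  # (n_rays, n_classes)

def semantic_loss(rendered_logits, frame_labels):
    # 2D supervision: rendered logits vs. per-pixel semantic labels
    # drawn from the video frames (frame_labels: (n_rays,) class indices).
    return F.cross_entropy(rendered_logits, frame_labels)
```

Because the rendering weights depend only on the densities, gradients from the 2D semantic labels flow back into the per-sample logits in proportion to each sample's contribution along the ray, which is how the densities act as opacity priors in the supervision.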

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2404.05163
Document Type:
Working Paper