
Occlusion Aware Unsupervised Learning of Optical Flow From Video

Authors:
Li, Jianfeng
Zhao, Junqiao
Feng, Tiantian
Ye, Chen
Xiong, Lu
Publication Year:
2020

Abstract

In this paper, we propose an unsupervised learning method for estimating the optical flow between video frames, with a particular focus on the occlusion problem. Occlusion, caused by the motion of an object or of the camera, occurs when certain pixels are visible in one video frame but not in adjacent frames. Because pixel correspondence is missing in occluded areas, an incorrectly computed photometric loss can mislead the training of the optical flow network. We observe that in a video sequence, the occlusions in the forward ($t\rightarrow t+1$) and backward ($t\rightarrow t-1$) frame pairs are usually complementary: pixels occluded in the subsequent frame are often not occluded in the previous frame, and vice versa. Exploiting this complementarity, we propose a new weighted loss to handle occlusion. In addition, we calculate gradients in multiple directions to provide richer supervision information. Our method achieves competitive optical flow accuracy compared to the baseline and some supervised methods on the KITTI 2012 and 2015 benchmarks. The source code has been released at https://github.com/jianfenglihg/UnOpticalFlow.git.

Comment: 6 pages, 5 figures
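To make the idea of the complementary weighted loss concrete, the following is a minimal, hypothetical sketch (not the authors' released code). It assumes soft occlusion masks `occ_fw` and `occ_bw` have already been estimated (e.g. via forward-backward flow consistency) and that the adjacent frames have been warped to the reference frame; the function and parameter names are illustrative only.

```python
# Hypothetical sketch of a complementary occlusion-weighted photometric loss.
# Assumes occlusion masks and warped frames are computed elsewhere.
import torch

def charbonnier(x, eps=1e-3):
    # Robust photometric penalty commonly used in unsupervised flow training.
    return torch.sqrt(x * x + eps * eps)

def complementary_photometric_loss(frame_t, warp_fw, warp_bw, occ_fw, occ_bw):
    """frame_t:  [B,3,H,W] reference frame I_t
       warp_fw:  I_{t+1} warped to t using the forward flow
       warp_bw:  I_{t-1} warped to t using the backward flow
       occ_fw, occ_bw: [B,1,H,W] soft occlusion masks (1 = occluded)."""
    err_fw = charbonnier(frame_t - warp_fw).mean(dim=1, keepdim=True)
    err_bw = charbonnier(frame_t - warp_bw).mean(dim=1, keepdim=True)

    # Complementarity assumption from the abstract: a pixel occluded in the
    # forward pair is usually visible in the backward pair, so down-weight the
    # occluded direction and rely on the non-occluded one.
    w_fw = 1.0 - occ_fw
    w_bw = 1.0 - occ_bw
    denom = (w_fw + w_bw).clamp(min=1e-6)

    loss = (w_fw * err_fw + w_bw * err_bw) / denom
    return loss.mean()
```

The weighting keeps a supervision signal at nearly every pixel: only pixels occluded in both directions contribute a vanishing weight, which is rare under the complementarity assumption.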

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2003.01960
Document Type: Working Paper