
Unsupervised Disentanglement of Pose, Appearance and Background from Images and Videos

Authors:
Dundar, Aysegul
Shih, Kevin J.
Garg, Animesh
Pottorf, Robert
Tao, Andrew
Catanzaro, Bryan
Publication Year:
2020

Abstract

Unsupervised landmark learning is the task of learning semantic keypoint-like representations without expensive keypoint-level annotations. A popular approach is to factorize an image into separate pose and appearance data streams, then to reconstruct the image from the factorized components. The pose representation should capture a set of consistent and tightly localized landmarks in order to facilitate reconstruction of the input image. Ultimately, we wish for our learned landmarks to focus on the foreground object of interest. However, reconstructing the entire image forces the model to allocate landmarks to modeling the background. This work explores the effects of factorizing the reconstruction task into separate foreground and background reconstructions, conditioning only the foreground reconstruction on the unsupervised landmarks. Our experiments demonstrate that the proposed factorization yields landmarks that are focused on the foreground object of interest. Furthermore, the rendered background quality also improves, as the background rendering pipeline no longer depends on the ill-suited landmarks to model its pose and appearance. We demonstrate this improvement in the context of the video-prediction task.
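The core idea of the abstract can be sketched as a masked composite: a foreground image (which, in the full model, would be rendered from landmark heatmaps and an appearance code) is blended over an independently rendered background via a soft mask derived from the landmark heatmaps. The following is a minimal, illustrative numpy sketch; the function names, the Gaussian heatmap parameterization, and the union-of-heatmaps mask are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

def landmark_heatmap(center, size, sigma=2.0):
    """Render a 2D Gaussian 'pose' heatmap for one landmark.
    `center` is (row, col); the Gaussian form is an illustrative choice."""
    rows, cols = np.mgrid[0:size, 0:size]
    d2 = (rows - center[0]) ** 2 + (cols - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def factorized_reconstruction(fg, bg, heatmaps):
    """Composite the landmark-conditioned foreground over a separately
    rendered background using a soft mask (union of landmark supports).
    In the real model, `fg` and `bg` would come from learned decoders;
    here they are plain arrays."""
    mask = np.clip(heatmaps.sum(axis=0), 0.0, 1.0)   # (H, W) soft mask
    return mask[..., None] * fg + (1.0 - mask[..., None]) * bg

# Toy usage: one landmark at the center of an 8x8 RGB image.
hm = landmark_heatmap((4, 4), 8)[None]   # (1, 8, 8) heatmap stack
fg = np.ones((8, 8, 3))                  # stand-in foreground render
bg = np.zeros((8, 8, 3))                 # stand-in background render
img = factorized_reconstruction(fg, bg, hm)
```

Because only the foreground branch sees the heatmaps, the reconstruction loss pushes the landmarks toward the foreground object, which is the behavior the paper reports.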

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2001.09518
Document Type:
Working Paper