Neural scene representation and rendering.
- Source :
- Science (New York, N.Y.) [Science] 2018 Jun 15; Vol. 360 (6394), pp. 1204-1210.
- Publication Year :
- 2018
Abstract
- Scene representation, the process of converting visual sensory data into concise descriptions, is a requirement for intelligent behavior. Recent work has shown that neural networks excel at this task when provided with large, labeled datasets. However, removing the reliance on human labeling remains an important open problem. To this end, we introduce the Generative Query Network (GQN), a framework within which machines learn to represent scenes using only their own sensors. The GQN takes as input images of a scene taken from different viewpoints, constructs an internal representation, and uses this representation to predict the appearance of that scene from previously unobserved viewpoints. The GQN demonstrates representation learning without human labels or domain knowledge, paving the way toward machines that autonomously learn to understand the world around them.
(Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.)
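The pipeline the abstract describes, encoding several posed context views into a single scene representation and then rendering a query viewpoint from it, can be illustrated with a minimal sketch. The PyTorch code below is an assumption-laden simplification: the class names, layer sizes, seven-number viewpoint encoding, and plain convolutional decoder are illustrative stand-ins, not the paper's recurrent latent-variable generator.

```python
# Minimal sketch of the GQN idea: per-view encoder, summed scene code,
# viewpoint-conditioned decoder. Shapes and architecture are assumptions.
import torch
import torch.nn as nn

class RepresentationNet(nn.Module):
    """Encodes one (image, viewpoint) observation into a scene code."""
    def __init__(self, r_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # 7 viewpoint numbers assumed: position (3) + orientation terms (4)
        self.fc = nn.Linear(64 + 7, r_dim)

    def forward(self, image, viewpoint):
        h = self.conv(image).flatten(1)                    # (B, 64)
        return self.fc(torch.cat([h, viewpoint], dim=1))   # (B, r_dim)

class QueryDecoder(nn.Module):
    """Predicts the image at a query viewpoint from the scene code."""
    def __init__(self, r_dim=256):
        super().__init__()
        self.fc = nn.Linear(r_dim + 7, 64 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, r, query_viewpoint):
        h = self.fc(torch.cat([r, query_viewpoint], dim=1)).view(-1, 64, 8, 8)
        return self.deconv(h)                              # (B, 3, 32, 32)

# Usage: encode a handful of context views of one scene, aggregate the
# per-view codes by elementwise sum, then render an unobserved viewpoint.
encoder, decoder = RepresentationNet(), QueryDecoder()
context_images = torch.rand(5, 3, 32, 32)   # five observed views
context_views = torch.rand(5, 7)            # their camera parameters
query_view = torch.rand(1, 7)               # previously unobserved viewpoint

r = encoder(context_images, context_views).sum(dim=0, keepdim=True)
predicted_image = decoder(r, query_view)    # (1, 3, 32, 32) rendering
```

The summed aggregation makes the scene code invariant to the number and order of context views; everything else here is placeholder detail standing in for the full model described in the paper.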
- Subjects :
- Machine Learning
- Neural Networks, Computer
- Vision, Ocular
Details
- Language :
- English
- ISSN :
- 1095-9203
- Volume :
- 360
- Issue :
- 6394
- Database :
- MEDLINE
- Journal :
- Science (New York, N.Y.)
- Publication Type :
- Academic Journal
- Accession number :
- 29903970
- Full Text :
- https://doi.org/10.1126/science.aar6170