1. The StreetLearn Environment and Dataset
- Authors
Mirowski, Piotr; Banki-Horvath, Andras; Anderson, Keith; Teplyashin, Denis; Hermann, Karl Moritz; Malinowski, Mateusz; Grimes, Matthew Koichi; Simonyan, Karen; Kavukcuoglu, Koray; Zisserman, Andrew; Hadsell, Raia
- Subjects
Computer Science - Artificial Intelligence, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Robotics
- Abstract
Navigation is a rich and well-grounded problem domain that drives progress in many different areas of research: perception, planning, memory, exploration, and optimisation in particular. Historically these challenges have been considered separately, with solutions built on stationary datasets - for example, recorded trajectories through an environment. These datasets cannot be used for decision-making and reinforcement learning, however, and in general the perspective of navigation as an interactive learning task, where the actions and behaviours of a learning agent are learned simultaneously with the perception and planning, is relatively unsupported. Thus, existing navigation benchmarks generally rely on static datasets (Geiger et al., 2013; Kendall et al., 2015) or simulators (Beattie et al., 2016; Shah et al., 2018). To support and validate research in end-to-end navigation, we present StreetLearn: an interactive, first-person, partially-observed visual environment that uses Google Street View for its photographic content and broad coverage, and give performance baselines for a challenging goal-driven navigation task. The environment code, baseline agent code, and the dataset are available at http://streetlearn.cc.
- Comment
13 pages, 6 figures, 4 tables. arXiv admin note: text overlap with arXiv:1804.00168
- Published
2019
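The abstract's central distinction is between static, pre-recorded datasets and an interactive environment in which the agent's own actions determine what it observes next. The sketch below illustrates that interaction loop in miniature; the `GridStreetEnv` class, its action set, and its sparse goal reward are hypothetical stand-ins chosen for illustration, not the actual StreetLearn API published at http://streetlearn.cc.

```python
"""Minimal sketch of an interactive, goal-driven navigation loop.
All names here (GridStreetEnv, ACTIONS, run_episode) are illustrative
assumptions, not the real StreetLearn interface."""
import random

ACTIONS = ("north", "south", "east", "west")
MOVES = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}


class GridStreetEnv:
    """Toy environment: locations sit on a grid; the agent must reach a goal node."""

    def __init__(self, size=10, seed=0):
        self.size = size
        self.rng = random.Random(seed)

    def reset(self):
        self.pos = (self.rng.randrange(self.size), self.rng.randrange(self.size))
        self.goal = (self.rng.randrange(self.size), self.rng.randrange(self.size))
        return self._observation()

    def _observation(self):
        # A street-level environment would return a first-person panorama image;
        # here the observation is just the current position and the goal descriptor.
        return {"position": self.pos, "goal": self.goal}

    def step(self, action):
        dx, dy = MOVES[action]
        x = min(max(self.pos[0] + dx, 0), self.size - 1)
        y = min(max(self.pos[1] + dy, 0), self.size - 1)
        self.pos = (x, y)
        done = self.pos == self.goal
        reward = 1.0 if done else 0.0  # sparse reward for reaching the goal
        return self._observation(), reward, done


def run_episode(env, policy, max_steps=200):
    """Interactive loop: the agent's actions determine what it observes next."""
    obs = env.reset()
    total = 0.0
    for _ in range(max_steps):
        obs, reward, done = env.step(policy(obs))
        total += reward
        if done:
            break
    return total


if __name__ == "__main__":
    env = GridStreetEnv()
    random_policy = lambda obs: random.choice(ACTIONS)  # a trained agent would condition on obs
    print("episode return:", run_episode(env, random_policy))
```

In the real environment the observation would be a street-level panorama rather than grid coordinates, but the control flow (reset, act, observe, receive reward) is the kind of closed loop in which a goal-driven agent is trained, as opposed to replaying a fixed, recorded trajectory.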