
ScienceWorld: Is your Agent Smarter than a 5th Grader?

Authors :
Wang, Ruoyao
Jansen, Peter
Côté, Marc-Alexandre
Ammanabrolu, Prithviraj
Publication Year :
2022

Abstract

We present ScienceWorld, a benchmark to test agents' scientific reasoning abilities in a new interactive text environment at the level of a standard elementary school science curriculum. Despite the transformer-based progress seen in question-answering and scientific text processing, we find that current models cannot reason about or explain learned science concepts in novel contexts. For instance, models can easily answer what the conductivity of a known material is but struggle when asked how they would conduct an experiment in a grounded environment to find the conductivity of an unknown material. This begs the question of whether current models are simply retrieving answers by way of seeing a large number of similar examples or if they have learned to reason about concepts in a reusable manner. We hypothesize that agents need to be grounded in interactive environments to achieve such reasoning capabilities. Our experiments provide empirical evidence supporting this hypothesis -- showing that a 1.5 million parameter agent trained interactively for 100k steps outperforms an 11 billion parameter model statically trained for scientific question-answering and reasoning from millions of expert demonstrations.

Comment: Accepted to EMNLP 2022
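As a rough illustration of what "interactive text environment" means here, the sketch below shows an agent issuing free-form text actions and receiving text observations and rewards, using the publicly released scienceworld Python package. The exact class names, arguments, and return values (ScienceWorldEnv, load, reset, step) are assumptions about that package's API and may differ between versions; the action sequence is purely hypothetical.

    # Minimal interaction sketch, assuming the `scienceworld` pip package's API;
    # names and signatures may vary by version.
    from scienceworld import ScienceWorldEnv

    env = ScienceWorldEnv("", envStepLimit=100)      # start the simulator
    task_names = env.getTaskNames()                  # task list, e.g. conductivity, boiling
    env.load(task_names[0], 0, "")                   # choose a task, variation 0, no simplifications
    obs, info = env.reset()                          # initial room description as text

    # Hypothetical action sequence: the agent acts via short natural-language commands.
    for action in ["look around", "open door to kitchen"]:
        obs, reward, done, info = env.step(action)   # text observation, scalar reward, done flag
        print(obs, reward, done)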

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2203.07540
Document Type :
Working Paper