
Inferring Compact Representations for Efficient Natural Language Understanding of Robot Instructions

Authors:
Siddharth Patki
Andrea F. Daniele
Matthew R. Walter
Thomas M. Howard
Source:
ICRA
Publication Year:
2019

Abstract

The speed and accuracy with which robots are able to interpret natural language is fundamental to realizing effective human-robot interaction. A great deal of attention has been paid to developing models and approximate inference algorithms that improve the efficiency of language understanding. However, existing methods still attempt to reason over a representation of the environment that is flat and unnecessarily detailed, which limits scalability. An open problem is then to develop methods capable of producing the most compact environment model sufficient for accurate and efficient natural language understanding. We propose a model that leverages environment-related information encoded within instructions to identify the subset of observations and perceptual classifiers necessary to perceive a succinct, instruction-specific environment representation. The framework uses three probabilistic graphical models trained from a corpus of annotated instructions to infer salient scene semantics, perceptual classifiers, and grounded symbols. Experimental results on two robots operating in different environments demonstrate that by exploiting the content and the structure of the instructions, our method learns compact environment representations that significantly improve the efficiency of natural language symbol grounding.

Accepted to ICRA 2019
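To make the core idea concrete, the following is a minimal, hypothetical sketch of instruction-conditioned classifier selection: running only the perceptual classifiers that an instruction's words suggest are relevant. The word-to-classifier likelihood table, the classifier names, and the noisy-OR relevance model are illustrative assumptions, not the authors' actual probabilistic graphical models.

```python
# Hypothetical sketch: select the subset of perceptual classifiers an
# instruction requires, so the robot perceives only what it needs.
# The likelihood table and noisy-OR scoring are illustrative assumptions.

# Assumed P(classifier needed | word appears in the instruction).
WORD_CLASSIFIER_LIKELIHOOD = {
    "box":   {"detect_box": 0.9, "detect_color": 0.3},
    "red":   {"detect_color": 0.95},
    "table": {"detect_table": 0.9},
}

def select_classifiers(instruction, threshold=0.5):
    """Return classifiers whose estimated relevance exceeds threshold.

    A classifier's relevance is approximated as the probability that at
    least one instruction word requires it (noisy-OR over words).
    """
    relevance = {}
    for word in instruction.lower().split():
        for clf, p in WORD_CLASSIFIER_LIKELIHOOD.get(word, {}).items():
            # Noisy-OR accumulation: 1 - prod_i (1 - p_i)
            relevance[clf] = 1.0 - (1.0 - relevance.get(clf, 0.0)) * (1.0 - p)
    return {clf for clf, r in relevance.items() if r >= threshold}

# Only the classifiers relevant to this instruction are activated.
print(select_classifiers("pick up the red box"))
```

In this toy version, "pick up the red box" activates only the box and color detectors, leaving other classifiers (and the observations they would process) out of the environment model entirely, which is the source of the efficiency gain the abstract describes.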

Details

Language:
English
Database:
OpenAIRE
Journal:
ICRA
Accession number:
edsair.doi.dedup.....92db00150c78ba3c2d1b7cb794027111