
Using Natural Language and Program Abstractions to Instill Human Inductive Biases in Machines

Authors :
Kumar, Sreejan
Correa, Carlos G.
Dasgupta, Ishita
Marjieh, Raja
Hu, Michael Y.
Hawkins, Robert D.
Daw, Nathaniel D.
Cohen, Jonathan D.
Narasimhan, Karthik
Griffiths, Thomas L.
Publication Year :
2022

Abstract

Strong inductive biases give humans the ability to quickly learn to perform a variety of tasks. Although meta-learning is a method to endow neural networks with useful inductive biases, agents trained by meta-learning may sometimes acquire strategies very different from those of humans. We show that co-training these agents on predicting representations from natural language task descriptions and from programs induced to generate such tasks guides them toward more human-like inductive biases. Human-generated language descriptions and program induction models that add new learned primitives both contain abstract concepts that can compress description length. Co-training on these representations results in more human-like behavior in downstream meta-reinforcement learning agents than less abstract controls (synthetic language descriptions, program induction without learned primitives), suggesting that the abstraction supported by these representations is key.

Comment: In Proceedings of the 36th Conference on Neural Information Processing Systems (NeurIPS 2022); winner of an Outstanding Paper Award.
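To make the co-training idea in the abstract concrete, the following is a minimal sketch of one way such an auxiliary objective could be set up: a shared task encoder feeds both a policy head (the meta-RL objective) and an auxiliary head trained to predict a precomputed embedding of the task's language description or induced program. This is an illustration under assumptions, not the authors' implementation; the module names, dimensions, and loss weighting (CoTrainedAgent, lang_dim, aux_weight) are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CoTrainedAgent(nn.Module):
    # Shared recurrent encoder with two heads: an action policy and an
    # auxiliary predictor of the task-description embedding.
    def __init__(self, obs_dim, n_actions, lang_dim, hidden=128):
        super().__init__()
        self.encoder = nn.GRU(obs_dim, hidden, batch_first=True)
        self.policy_head = nn.Linear(hidden, n_actions)
        self.lang_head = nn.Linear(hidden, lang_dim)

    def forward(self, obs_seq):
        # obs_seq: (batch, time, obs_dim); use the final hidden state
        # as the task representation shared by both heads.
        _, h = self.encoder(obs_seq)
        h = h.squeeze(0)
        return self.policy_head(h), self.lang_head(h)

def co_training_loss(logits, actions, returns, pred_lang, target_lang, aux_weight=1.0):
    # REINFORCE-style policy loss plus an auxiliary regression loss toward a
    # fixed embedding of the natural-language description (or induced program).
    log_probs = F.log_softmax(logits, dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    rl_loss = -(chosen * returns).mean()
    aux_loss = F.mse_loss(pred_lang, target_lang)
    return rl_loss + aux_weight * aux_loss

In this sketch the auxiliary term pressures the shared representation toward the abstractions present in the language or program targets, which is the mechanism the abstract credits for the more human-like downstream behavior.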

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2205.11558
Document Type :
Working Paper