
Toward Universal and Interpretable World Models for Open-ended Learning Agents

Authors: Da Costa, Lancelot
Source: NeurIPS 2024 Workshop on Intrinsically Motivated Open-ended Learning (IMOL)
Publication Year: 2024

Abstract

We introduce a generic, compositional, and interpretable class of generative world models that supports open-ended learning agents. This is a sparse class of Bayesian networks capable of approximating a broad range of stochastic processes, which provides agents with the ability to learn world models in a manner that may be both interpretable and computationally scalable. This approach, integrating Bayesian structure learning and intrinsically motivated (model-based) planning, enables agents to actively develop and refine their world models, which may lead to developmental learning and more robust, adaptive behavior.

Comment: 4 pages including appendix, 6 including appendix and references; 2 figures
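As a rough illustration of the kind of machinery the abstract alludes to, the sketch below scores a few candidate sparse Bayesian-network structures for a toy world model using a Dirichlet-multinomial marginal likelihood with an edge-count (sparsity) penalty. This is a minimal sketch of generic Bayesian structure learning, not the paper's construction; the variable names, the penalty weight, and the restriction to a handful of candidate graphs are assumptions made here for brevity.

import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)

# Toy data: three binary variables; C causes E, while D is independent.
n = 500
C = rng.integers(0, 2, n)
E = (C ^ (rng.random(n) < 0.1)).astype(int)   # E copies C with 10% noise
D = rng.integers(0, 2, n)
data = np.stack([C, E, D], axis=1)
names = ["C", "E", "D"]

def family_log_ml(child, parents, data, alpha=1.0, card=2):
    """Dirichlet-multinomial log marginal likelihood of one child given its parents."""
    if parents:
        # Encode each joint parent configuration as a single integer index.
        parent_idx = np.ravel_multi_index(data[:, list(parents)].T, (card,) * len(parents))
    else:
        parent_idx = np.zeros(len(data), dtype=int)
    total = 0.0
    for cfg in np.unique(parent_idx):
        counts = np.bincount(data[parent_idx == cfg, child], minlength=card)
        total += (gammaln(alpha * card) - gammaln(alpha * card + counts.sum())
                  + np.sum(gammaln(alpha + counts) - gammaln(alpha)))
    return total

def score(structure, data, sparsity=2.0):
    """Log score of a structure: sum of family scores minus a per-edge penalty."""
    n_edges = sum(len(p) for p in structure.values())
    return sum(family_log_ml(c, p, data) for c, p in structure.items()) - sparsity * n_edges

# Enumerate a few candidate parent sets (only C may act as a parent, keeping graphs acyclic).
candidates = []
for e_parents in [(), (0,)]:
    for d_parents in [(), (0,)]:
        candidates.append({0: (), 1: e_parents, 2: d_parents})

best = max(candidates, key=lambda s: score(s, data))
for child, parents in best.items():
    print(names[child], "<-", [names[p] for p in parents])

On this toy data the penalized score prefers the sparse graph E <- C with D left parentless, which is the sense in which an edge penalty of this kind favors sparse, interpretable structures.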

Details

Database: arXiv
Journal: NeurIPS 2024 Workshop on Intrinsically Motivated Open-ended Learning (IMOL)
Publication Type: Report
Accession number: edsarx.2409.18676
Document Type: Working Paper