
Bayesian Exploration Networks

Authors:
Fellows, Mattie
Kaplowitz, Brandon
de Witt, Christian Schroeder
Whiteson, Shimon
Publication Year:
2023

Abstract

Bayesian reinforcement learning (RL) offers a principled and elegant approach to sequential decision making under uncertainty. Most notably, Bayesian agents do not face an exploration/exploitation dilemma, a major pathology of frequentist methods. However, theoretical understanding of model-free approaches is lacking. In this paper, we introduce a novel Bayesian model-free formulation and the first analysis showing that model-free approaches can yield Bayes-optimal policies. We show that all existing model-free approaches make approximations that yield policies that can be arbitrarily Bayes-suboptimal. As a first step towards model-free Bayes optimality, we introduce the Bayesian exploration network (BEN), which uses normalising flows to model both the aleatoric uncertainty (via density estimation) and the epistemic uncertainty (via variational inference) in the Bellman operator. In the limit of complete optimisation, BEN learns true Bayes-optimal policies, but, as in variational expectation-maximisation, partial optimisation renders our approach tractable. Empirical results demonstrate that BEN can learn true Bayes-optimal policies in tasks where existing model-free approaches fail.

Comment: ICML 2024 version update
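To make the construction concrete, the listing below is a minimal, hypothetical sketch of the idea described in the abstract: a conditional normalising flow (here reduced to a single affine layer) models the distribution over Bellman targets, capturing aleatoric uncertainty, while a mean-field Gaussian variational posterior over the flow's output weights captures epistemic uncertainty. This is not the authors' implementation; the class name, network sizes, and the omission of the KL regulariser are all assumptions made for illustration.

# A minimal, hypothetical sketch in PyTorch (not the paper's implementation).
# Aleatoric uncertainty: a conditional affine flow y = mu(s,a) + sigma(s,a) * z,
# with z ~ N(0, 1), fitted by maximum likelihood over observed Bellman targets.
# Epistemic uncertainty: a mean-field Gaussian variational posterior over the
# flow's output-layer weights, sampled via the reparameterisation trick.
import torch
import torch.nn as nn

class ConditionalAffineFlow(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
        # Variational parameters of the output head: weight ~ N(mu, softplus(rho)^2).
        self.w_mu = nn.Parameter(torch.zeros(2, hidden))
        self.w_rho = nn.Parameter(torch.full((2, hidden), -3.0))

    def _sample_head(self) -> torch.Tensor:
        # One reparameterised sample from the weight posterior.
        std = torch.nn.functional.softplus(self.w_rho)
        return self.w_mu + std * torch.randn_like(std)

    def log_prob(self, sa: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # log p(y | s, a) via the change-of-variables formula for the affine flow.
        h = self.net(sa)                      # (batch, hidden) conditioning features
        w = self._sample_head()               # sampled head weights, shape (2, hidden)
        params = h @ w.t()                    # (batch, 2): [mu, log_sigma]
        mu, log_sigma = params[:, 0], params[:, 1]
        z = (y - mu) * torch.exp(-log_sigma)  # inverse flow: target -> base noise
        base_log_prob = -0.5 * (z ** 2 + torch.log(torch.tensor(2.0 * torch.pi)))
        return base_log_prob - log_sigma      # subtract log |dy/dz|

# Hypothetical usage: fit the flow to (state-action, Bellman target) pairs.
flow = ConditionalAffineFlow(in_dim=4)
sa = torch.randn(32, 4)       # placeholder state-action features
targets = torch.randn(32)     # placeholder Bellman targets
loss = -flow.log_prob(sa, targets).mean()
loss.backward()

A full variational treatment would add a KL term between the weight posterior and its prior to this negative log-likelihood; it is omitted here to keep the sketch short.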

Details

Database:
OAIster
Publication Type:
Electronic Resource
Accession number:
edsoai.on1438474077
Document Type:
Electronic Resource