
Model-Free Risk-Sensitive Reinforcement Learning

Authors:
Delétang, Grégoire
Grau-Moya, Jordi
Kunesch, Markus
Genewein, Tim
Brekelmans, Rob
Legg, Shane
Ortega, Pedro A.
Publication Year:
2021

Abstract

We extend temporal-difference (TD) learning in order to obtain risk-sensitive, model-free reinforcement learning algorithms. This extension can be regarded as a modification of the Rescorla-Wagner rule, where the (sigmoidal) stimulus is taken to be either the event of over- or underestimating the TD target. As a result, one obtains a stochastic approximation rule for estimating the free energy from i.i.d. samples generated by a Gaussian distribution with unknown mean and variance. Since the Gaussian free energy is known to be a certainty-equivalent sensitive to the mean and the variance, the learning rule has applications in risk-sensitive decision-making.

Comment: DeepMind Tech Report: 13 pages, 4 figures
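For intuition, below is a minimal sketch of a stochastic-approximation rule whose fixed point is the Gaussian free energy F_beta = mu + beta * sigma^2 / 2, the certainty equivalent referred to in the abstract. The exponential update used here is a standard rule with this fixed point and is only an illustration: the exact (sigmoidal) update analysed in the tech report may differ, and the function name, step-size schedule, and parameter values are assumptions.

    import numpy as np

    def free_energy_estimate(samples, beta, alpha0=0.1):
        # Stochastic approximation of the free energy
        #   F_beta = (1/beta) * log E[exp(beta * X)]
        # from i.i.d. samples. The expected update is zero exactly at
        # v = F_beta, so v tracks the certainty equivalent.
        # Illustrative rule, not necessarily the report's exact update.
        v = samples[0]
        for t, x in enumerate(samples[1:], start=1):
            alpha = alpha0 / (1.0 + 0.01 * t)  # decaying step size (assumption)
            v += alpha * (np.exp(beta * (x - v)) - 1.0) / beta
        return v

    rng = np.random.default_rng(0)
    mu, sigma, beta = 1.0, 2.0, -0.5  # beta < 0 gives a risk-averse estimate
    xs = rng.normal(mu, sigma, size=200_000)
    print("stochastic estimate:", free_energy_estimate(xs, beta))
    print("closed form mu + beta*sigma^2/2:", mu + beta * sigma**2 / 2)

Since (exp(beta * d) - 1) / beta tends to d as beta tends to 0, the rule reduces to the ordinary TD/Rescorla-Wagner update v <- v + alpha * (x - v) in the risk-neutral limit; a negative beta penalises variance (risk aversion) and a positive beta rewards it (risk seeking).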

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2111.02907
Document Type:
Working Paper