
Tensor and Matrix Low-Rank Value-Function Approximation in Reinforcement Learning

Authors :
Rozada, Sergio
Paternain, Santiago
Marques, Antonio G.
Publication Year :
2022

Abstract

Value-function (VF) approximation is a central problem in Reinforcement Learning (RL). Classical non-parametric VF estimation suffers from the curse of dimensionality. As a result, parsimonious parametric models have been adopted to approximate VFs in high-dimensional spaces, with most efforts focused on linear and neural-network-based approaches. In contrast, this paper puts forth a parsimonious non-parametric approach, where we use stochastic low-rank algorithms to estimate the VF matrix in an online and model-free fashion. Furthermore, as VFs tend to be multi-dimensional, we propose replacing the classical VF matrix representation with a tensor (multi-way array) representation and then using the PARAFAC decomposition to design an online model-free tensor low-rank algorithm. Different versions of the algorithms are proposed, their complexity is analyzed, and their performance is assessed numerically using standardized RL environments.

Comment: 13 pages, 6 figures, 2 tables
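The matrix case described in the abstract can be illustrated with a minimal sketch: approximate the Q matrix by a rank-k factorization Q ≈ L Rᵀ and update the factors with stochastic semi-gradient steps on the temporal-difference error, online and model-free. The environment, step sizes, and exact update rules below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, rank = 20, 4, 3   # hypothetical toy MDP sizes
L = 0.1 * rng.standard_normal((n_states, rank))   # state factors
R = 0.1 * rng.standard_normal((n_actions, rank))  # action factors

gamma, alpha = 0.9, 0.05  # discount and step size (illustrative)

def td_update(s, a, r, s_next):
    """One stochastic low-rank update from a single observed transition."""
    target = r + gamma * (L[s_next] @ R.T).max()
    delta = target - L[s] @ R[a]       # TD error under the factorization
    Ls = L[s].copy()
    # Semi-gradient of 0.5 * delta^2 with respect to each factor:
    L[s] += alpha * delta * R[a]
    R[a] += alpha * delta * Ls
    return delta

# Simulate random transitions of a hypothetical environment in which
# landing in state 0 pays reward 1.
for _ in range(1000):
    s, a = rng.integers(n_states), rng.integers(n_actions)
    s_next = rng.integers(n_states)
    td_update(s, a, 1.0 if s_next == 0 else 0.0, s_next)

Q_hat = L @ R.T  # full low-rank Q estimate
print(Q_hat.shape)  # → (20, 4)
```

Only the two factor rows touched by the transition are updated, so each step costs O(rank) rather than O(|S||A|); the tensor variant in the paper extends this idea by giving each state or action dimension its own PARAFAC factor matrix.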

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2201.09736
Document Type :
Working Paper