
The complexity of non-stationary reinforcement learning

Authors :
Papadimitriou, Christos
Peng, Binghui
Publication Year :
2023

Abstract

The problem of continual learning in the domain of reinforcement learning, often called non-stationary reinforcement learning, has been identified as an important challenge to the application of reinforcement learning. We prove a worst-case complexity result which we believe captures this challenge: modifying the transition probabilities or the reward of a single state-action pair in a reinforcement learning problem requires an amount of time almost as large as the number of states in order to keep the value function up to date, unless the strong exponential time hypothesis (SETH) is false; SETH is a widely accepted strengthening of the P $\neq$ NP conjecture. Recall that the number of states in current applications of reinforcement learning is typically astronomical. In contrast, we show that just $\textit{adding}$ a new state-action pair is considerably easier to implement.
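To build intuition for why keeping the value function up to date after a local change can be expensive, the toy sketch below (an illustration, not the paper's construction) runs textbook value iteration on a small cyclic MDP. Changing the reward of a single state-action pair propagates backwards through the chain and alters the value of every state, so a naive recomputation touches on the order of $|S|$ states. All names (`value_iteration`, the chain MDP) are hypothetical choices for this example.

```python
# Illustrative sketch (not from the paper): value iteration on a toy MDP,
# showing that editing one state-action pair's reward can change the
# values of many other states, so a naive update touches ~|S| states.

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """P[s][a] = list of (prob, next_state); R[s][a] = immediate reward."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            best = max(
                R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                for a in P[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# A 5-state cycle: from each state, action "move" goes to the next state.
n = 5
P = {s: {"move": [(1.0, (s + 1) % n)]} for s in range(n)}
R = {s: {"move": 0.0} for s in range(n)}

V_before = value_iteration(P, R)  # all zeros: every reward is 0

# Modify the reward of the single pair (state 0, "move"). Because values
# are discounted sums over future states, the change feeds back around
# the cycle and shifts every state's value.
R[0]["move"] = 1.0
V_after = value_iteration(P, R)

changed = sum(1 for s in range(n) if abs(V_after[s] - V_before[s]) > 1e-6)
print(changed)  # every one of the n states changed value
```

By contrast, adding a brand-new state-action pair only requires evaluating one new backup against the existing values, which matches the paper's point that additions are considerably cheaper than modifications.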

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2307.06877
Document Type :
Working Paper