
Low-complexity algorithm for restless bandits with imperfect observations.

Authors :
Liu, Keqin
Weber, Richard
Zhang, Chengzhong
Source :
Mathematical Methods of Operations Research; Oct 2024, Vol. 100, Issue 2, p467-508, 42p
Publication Year :
2024

Abstract

We consider a class of restless bandit problems that finds broad application in reinforcement learning and stochastic optimization. We consider N independent discrete-time Markov processes, each of which has two possible states: 1 and 0 ('good' and 'bad'). Reward accrues only if a process is in state 1 and is observed to be so. The aim is to maximize the expected discounted sum of returns over the infinite horizon, subject to the constraint that only M (< N) processes may be observed at each step. Observation is error-prone: there are known probabilities that state 1 (0) will be observed as 0 (1). From these, one can compute, at any time t, the probability that process i is in state 1. The resulting system may be modeled as a restless multi-armed bandit problem with an information state space of uncountable cardinality. Restless bandit problems with even finite state spaces are PSPACE-hard in general. We propose a novel approach for simplifying the dynamic programming equations of this class of restless bandits and develop a low-complexity algorithm that achieves strong performance and is readily extensible to the general restless bandit model with observation errors. Under certain conditions, we establish the existence (indexability) of the Whittle index and its equivalence to our algorithm. When those conditions do not hold, we show by numerical experiments that our algorithm achieves near-optimal performance across the general parameter space. Furthermore, we theoretically prove the optimality of our algorithm for homogeneous systems. [ABSTRACT FROM AUTHOR]
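The information state referred to in the abstract is the posterior probability that each process is in state 1, maintained per process and updated after each (possibly erroneous) observation. As a rough illustrative sketch only, and not the authors' algorithm, the per-process belief update for such a two-state model with known observation-error probabilities might look as follows; all function names and parameter values here are assumptions introduced for illustration.

# Minimal sketch (assumption, not from the paper): Bayesian belief update for one
# two-state Markov process observed through a noisy channel, followed by one-step
# prediction under the Markov dynamics.

def bayes_update(omega, observed_as_one, eps10, eps01):
    """Posterior probability of state 1 after a noisy observation.

    omega           -- prior belief that the process is in state 1
    observed_as_one -- True if the observation reported state 1
    eps10           -- probability that true state 1 is observed as 0
    eps01           -- probability that true state 0 is observed as 1
    """
    if observed_as_one:
        num = omega * (1 - eps10)            # truly 1, correctly seen as 1
        den = num + (1 - omega) * eps01      # plus truly 0, mistakenly seen as 1
    else:
        num = omega * eps10                  # truly 1, mistakenly seen as 0
        den = num + (1 - omega) * (1 - eps01)
    return num / den

def markov_predict(omega, p11, p01):
    """One-step belief prediction under the two-state Markov chain.

    p11 -- P(next state is 1 | current state is 1)
    p01 -- P(next state is 1 | current state is 0)
    """
    return omega * p11 + (1 - omega) * p01

# Example (illustrative numbers): start from belief 0.5, observe "1", propagate one step.
omega = bayes_update(0.5, observed_as_one=True, eps10=0.1, eps01=0.05)
omega = markov_predict(omega, p11=0.8, p01=0.3)
print(round(omega, 4))

Because this belief lies in the continuous interval [0, 1], the resulting restless bandit has an information state space of uncountable cardinality, which is what motivates the low-complexity approximation developed in the paper.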

Details

Language :
English
ISSN :
1432-2994
Volume :
100
Issue :
2
Database :
Complementary Index
Journal :
Mathematical Methods of Operations Research
Publication Type :
Academic Journal
Accession number :
180268864
Full Text :
https://doi.org/10.1007/s00186-024-00868-x