Reinforcement Learning Design for Quickest Change Detection
- Publication Year :
- 2024
Abstract
- The field of quickest change detection (QCD) concerns the design and analysis of algorithms that estimate, in real time, the time at which an important event takes place and identify properties of the post-change behavior. It is shown in this paper that approaches based on reinforcement learning (RL) can be applied using any "surrogate information state" process that is adapted to the observations. Hence we are left to choose both the surrogate information state process and the algorithm. For the former, it is argued that many choices are available, based on a rich theory of asymptotic statistics for QCD. Two approaches to RL design are considered: (i) stochastic gradient descent based on an actor-critic formulation. Theory is largely complete for this approach: the algorithm is unbiased and will converge to a local minimum. However, it is shown that the variance of the stochastic gradients can be very large, necessitating commensurately long run times; (ii) Q-learning algorithms based on a version of the projected Bellman equation. It is shown that the algorithm is stable, in the sense of bounded sample paths, and that a solution to the projected Bellman equation exists under mild conditions. Numerical experiments illustrate these findings and provide a roadmap for algorithm design in more general settings.
- Comment: Preprint version of "Reinforcement Learning Design for Quickest Change Detection", IEEE Conference on Decision and Control, 2024 (to appear)
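- To make the notion of a "surrogate information state" concrete, the sketch below shows one classical choice from the QCD literature: the CUSUM statistic, updated recursively from the log-likelihood ratio of post-change versus pre-change observation models, paired with a simple threshold stopping rule. This is a minimal illustration only, not the paper's RL construction; the Gaussian models, the threshold value, and all function names are assumptions made for the example.

```python
# Illustrative sketch of a surrogate information state for QCD (not the paper's
# exact formulation): the CUSUM statistic with a threshold stopping rule.
import numpy as np

def log_likelihood_ratio(y, mu0=0.0, mu1=1.0, sigma=1.0):
    """Log-likelihood ratio of the post-change N(mu1, sigma^2) model
    versus the pre-change N(mu0, sigma^2) model (assumed for illustration)."""
    return ((y - mu0) ** 2 - (y - mu1) ** 2) / (2.0 * sigma ** 2)

def cusum_change_detector(observations, threshold=5.0):
    """Recursive CUSUM update; declares a change once the statistic
    crosses the (assumed) threshold."""
    x = 0.0  # surrogate information state
    for n, y in enumerate(observations, start=1):
        x = max(0.0, x + log_likelihood_ratio(y))
        if x >= threshold:
            return n, x  # declared change time and final statistic
    return None, x  # no change declared

# Example: 100 pre-change N(0,1) samples followed by 50 post-change N(1,1) samples.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.0, 1.0, 50)])
print(cusum_change_detector(data))
```

- In an RL design of the kind the abstract describes, a statistic such as this would serve as the state fed to the learned policy, with the stopping threshold (or a richer stopping rule) obtained from the actor-critic or Q-learning procedure rather than fixed in advance.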
- Subjects :
- Mathematics - Optimization and Control
- Computer Science - Information Theory
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2403.14109
- Document Type :
- Working Paper