Q-Learning Algorithms with Random Truncation Bounds and Applications to Effective Parallel Computing.
- Source :
- Journal of Optimization Theory & Applications. May 2008, Vol. 137, Issue 2, p435-451. 17p. 1 Chart.
- Publication Year :
- 2008
Abstract
- Motivated by an important problem of load balancing in parallel computing, this paper examines a modified algorithm to enhance Q-learning methods, especially in asynchronous recursive procedures for self-adaptive load distribution at runtime. Unlike the existing projection method that utilizes a fixed region, our algorithm employs a sequence of growing truncation bounds to ensure the boundedness of the iterates. Convergence and rates of convergence of the proposed algorithm are established. This class of algorithms has broad applications in signal processing, learning, financial engineering, and other related fields. [ABSTRACT FROM AUTHOR]
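The abstract contrasts projection onto a fixed region with a sequence of growing truncation bounds that keep the Q-learning iterates bounded while letting them range freely in the limit. A minimal sketch of that idea, on a toy MDP with an illustrative bound schedule M_k (all names, the environment, and the schedule are assumptions, not the paper's construction):

```python
import numpy as np

# Sketch: asynchronous Q-learning where each iterate is truncated to
# [-M_k, M_k] with M_k growing slowly to infinity, rather than being
# projected onto a fixed region. The MDP and bound schedule below are
# hypothetical, chosen only to make the mechanism concrete.

rng = np.random.default_rng(0)
n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))
gamma = 0.9  # discount factor

def bound(k):
    # Growing truncation bound M_k -> infinity (illustrative schedule).
    return 10.0 * (1.0 + np.log1p(k))

def step(state, action):
    # Toy MDP: uniformly random next state; reward favors action 0.
    next_state = int(rng.integers(n_states))
    reward = 1.0 if action == 0 else 0.0
    return next_state, reward

s = 0
for k in range(1, 5001):
    # Epsilon-greedy action selection.
    a = int(rng.integers(n_actions)) if rng.random() < 0.1 else int(Q[s].argmax())
    s_next, r = step(s, a)
    alpha = 1.0 / k**0.6  # diminishing step size
    td_error = r + gamma * Q[s_next].max() - Q[s, a]
    Q[s, a] += alpha * td_error
    M = bound(k)
    np.clip(Q, -M, M, out=Q)  # truncate iterates to [-M_k, M_k]
    s = s_next

# The iterates remain inside the current truncation region by construction.
print(bool(np.all(np.abs(Q) <= bound(5000))))
```

Because the bounds grow without limit, the truncation eventually stops binding once the iterates settle, which is what distinguishes this scheme from a fixed projection region that can bias the limit point.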
Details
- Language :
- English
- ISSN :
- 00223239
- Volume :
- 137
- Issue :
- 2
- Database :
- Academic Search Index
- Journal :
- Journal of Optimization Theory & Applications
- Publication Type :
- Academic Journal
- Accession number :
- 32919497
- Full Text :
- https://doi.org/10.1007/s10957-007-9331-9