Asynchronous Parallel Policy Gradient Methods for the Linear Quadratic Regulator
- Publication Year :
- 2024
Abstract
- Learning policies in an asynchronous parallel way is essential to the numerous successes of RL in solving large-scale problems. However, the convergence of such methods has not been rigorously evaluated. To this end, we adopt an asynchronous parallel zero-order policy gradient (AZOPG) method to solve the continuous-time linear quadratic regulation problem. Specifically, as in the celebrated A3C algorithm, multiple parallel workers asynchronously estimate policy gradients, which are then sent to a central master for policy updates. By quantifying the convergence rate of its policy iterations, we establish the linear speedup property of AZOPG, both in theory and in simulation, which clearly reveals the advantage of using parallel workers for learning policies.
- Comment: This article was submitted to IEEE TAC on Jan. 10, 2024
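
The abstract describes an A3C-style master–worker pattern in which workers form zero-order policy gradient estimates and a master applies the updates. Below is a minimal, illustrative Python sketch of that pattern; the toy system matrices, the discretized finite-horizon cost (a stand-in for the paper's continuous-time objective), the step size, and names such as `zo_gradient` and `worker` are assumptions for illustration, not details taken from the paper.

```python
import threading
import numpy as np

# Hypothetical toy system x' = Ax + Bu with cost integral of x'Qx + u'Ru.
# The continuous-time cost is approximated by forward-Euler simulation
# over a finite horizon (an assumption made for this sketch).
A = np.array([[0.0, 1.0], [-1.0, 0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)
dt, horizon = 0.01, 10.0

def cost(K, x0=np.array([1.0, 0.0])):
    """Approximate J(K) by simulating x' = (A - B K) x from x0."""
    x, J = x0.copy(), 0.0
    for _ in range(int(horizon / dt)):
        u = -K @ x
        J += (x @ Q @ x + u @ R @ u) * dt
        x = x + (A @ x + B @ u) * dt  # forward-Euler integration step
    return J

def zo_gradient(K, r=0.05):
    """Two-point zero-order PG estimate (d / 2r) (J(K+rU) - J(K-rU)) U,
    with U drawn uniformly from the unit sphere of gain matrices."""
    U = np.random.randn(*K.shape)
    U /= np.linalg.norm(U)
    d = K.size
    return d / (2 * r) * (cost(K + r * U) - cost(K - r * U)) * U

# Shared policy held by the master; workers read and update it asynchronously.
# Python threads interleave rather than truly run in parallel, so they only
# stand in for the parallel workers of the paper's setting.
K = np.array([[1.0, 1.0]])  # initial stabilizing gain (assumed)
lock = threading.Lock()

def worker(num_steps, lr=1e-3):
    global K
    for _ in range(num_steps):
        with lock:
            K_local = K.copy()    # read a (possibly stale) policy
        g = zo_gradient(K_local)  # zero-order PG estimate, computed off-lock
        with lock:
            K = K - lr * g        # master-side policy update

threads = [threading.Thread(target=worker, args=(200,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("final gain K =", K, "cost =", cost(K))
```

The two-point estimator is used here because it typically has lower variance than a one-point estimate; whether it matches the paper's exact estimator is not determined by the abstract.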
- Subjects :
- Mathematics - Optimization and Control
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession Number :
- edsarx.2407.03233
- Document Type :
- Working Paper