
Asynchronous Heterogeneous Linear Quadratic Regulator Design

Authors:
Toso, Leonardo F.
Wang, Han
Anderson, James
Publication Year:
2024

Abstract

We address the problem of designing an LQR controller in a distributed setting, where M similar but not identical systems share their locally computed policy gradient (PG) estimates with a server that aggregates the estimates and computes a controller that, on average, performs well on all systems. Learning in a distributed setting has the potential to offer statistical benefits: multiple datasets can be leveraged simultaneously to produce more accurate policy gradient estimates. However, the interplay of heterogeneous trajectory data and varying levels of local computational power introduces bias into the aggregated PG descent direction and prevents us from fully exploiting the parallelism in the distributed computation. The latter stems from synchronous aggregation, where straggler systems negatively impact the runtime. To address this, we propose an asynchronous policy gradient algorithm for LQR control design. By carefully controlling the "staleness" in the asynchronous aggregation, we show that the designed controller converges to each system's $\epsilon$-near optimal controller up to a heterogeneity bias. Furthermore, we prove that our asynchronous approach obtains exact local convergence at a sub-linear rate.

Comment: Leonardo F. Toso and Han Wang contributed equally to this work
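The abstract's core mechanism, a server applying possibly stale policy-gradient updates contributed by heterogeneous workers, can be illustrated with a minimal sketch. This is not the authors' algorithm: it uses scalar LQR systems with a closed-form cost, a finite-difference stand-in for the local PG estimate, and a simulated staleness bound; the function names (`lqr_cost`, `async_pg`), the example system parameters, and the `max_staleness` mechanism are all illustrative assumptions.

```python
import random

def lqr_cost(k, a, b, q=1.0, r=1.0):
    """Infinite-horizon cost of the policy u = -k*x from x0 = 1 for the
    scalar system x_{t+1} = a*x_t + b*u_t with stage cost q*x^2 + r*u^2."""
    c = a - b * k                       # closed-loop gain
    if abs(c) >= 1.0:
        return float("inf")             # destabilizing policy
    return (q + r * k * k) / (1.0 - c * c)

def grad(k, a, b, eps=1e-5):
    """Central-difference gradient: stands in for a worker's local PG estimate."""
    return (lqr_cost(k + eps, a, b) - lqr_cost(k - eps, a, b)) / (2.0 * eps)

def async_pg(systems, k0=0.5, lr=0.01, rounds=2000, max_staleness=3, seed=0):
    """Asynchronous aggregation sketch: each round, one worker reports a
    gradient computed at a server iterate at most `max_staleness` rounds old
    (mimicking stragglers), and the server applies it immediately."""
    rng = random.Random(seed)
    history = [k0]                      # past server iterates
    k = k0
    for _ in range(rounds):
        a, b = rng.choice(systems)      # the worker that reports this round
        delay = rng.randint(0, min(max_staleness, len(history) - 1))
        stale_k = history[-1 - delay]   # iterate the worker actually used
        k = k - lr * grad(stale_k, a, b)
        history.append(k)
    return k

# Two similar but not identical systems: heterogeneity in (a, b).
systems = [(0.9, 1.0), (1.0, 0.9)]
k_avg = async_pg(systems)
```

Because the workers optimize different costs, `k_avg` settles near a stationary point of the average objective rather than either system's own optimum, which is the "heterogeneity bias" the abstract refers to; the staleness bound keeps the stale gradients close enough to the current iterate for the descent to remain stable.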

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2404.09061
Document Type:
Working Paper