1. Model-Free Trajectory-based Policy Optimization with Monotonic Improvement.
- Author
- Akrour, Riad; Abdolmaleki, Abbas; Abdulsamad, Hany; Peters, Jan; Neumann, Gerhard
- Subjects
- System dynamics; Mathematical optimization; Approximation theory; Reinforcement learning; Algorithms
- Abstract
Many recent trajectory optimization algorithms alternate between a linear approximation of the system dynamics around the mean trajectory and a conservative policy update. One way of constraining the policy change is by bounding the Kullback-Leibler (KL) divergence between successive policies. These approaches have already demonstrated great experimental success in challenging problems such as end-to-end control of physical systems. However, the linear approximation of the system dynamics can introduce a bias in the policy update and prevent convergence to the optimal policy. In this article, we propose a new model-free trajectory-based policy optimization algorithm with guaranteed monotonic improvement. The algorithm backpropagates a local, quadratic, and time-dependent Q-function learned from trajectory data instead of a model of the system dynamics. Our policy update ensures exact satisfaction of the KL constraint without simplifying assumptions on the system dynamics. We experimentally demonstrate, on highly non-linear control tasks, that our algorithm improves performance compared to approaches that linearize the system dynamics. To show the monotonic improvement of our algorithm, we additionally conduct a theoretical analysis of our policy update scheme and derive a lower bound on the change in policy return between successive iterations. [ABSTRACT FROM AUTHOR] (A sketch of a KL-constrained Gaussian policy update of the kind described here follows the record below.)
- Published
- 2018
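
A KL-constrained policy update against a Q-function that is quadratic in the action admits a closed-form solution when the policy is Gaussian. The snippet below is a minimal, hypothetical sketch of such an update for a single time step, not the authors' implementation: it assumes a local action-value model Q(a) = -0.5 aᵀA a + aᵀb with A positive definite, a Gaussian policy N(m, S), and it finds the temperature η that satisfies the KL bound by bisection. All function and variable names are illustrative.

```python
import numpy as np

def kl_gaussians(m_new, S_new, m_old, S_old):
    """KL( N(m_new, S_new) || N(m_old, S_old) ) for multivariate Gaussians."""
    d = m_old.shape[0]
    S_old_inv = np.linalg.inv(S_old)
    diff = m_old - m_new
    return 0.5 * (np.trace(S_old_inv @ S_new)
                  + diff @ S_old_inv @ diff
                  - d
                  + np.log(np.linalg.det(S_old) / np.linalg.det(S_new)))

def kl_constrained_update(m_old, S_old, A, b, epsilon, eta_max=1e6, iters=60):
    """
    Update a Gaussian policy N(m_old, S_old) against a local quadratic
    action-value model Q(a) = -0.5 a^T A a + a^T b, subject to
    KL(new || old) <= epsilon.  (Illustrative sketch: A is assumed
    positive definite; the temperature eta is found by bisection.)
    """
    S_old_inv = np.linalg.inv(S_old)

    def solve(eta):
        # pi_new(a) ∝ pi_old(a) * exp(Q(a) / eta) is again Gaussian.
        S_new = np.linalg.inv(S_old_inv + A / eta)
        m_new = S_new @ (S_old_inv @ m_old + b / eta)
        return m_new, S_new

    lo, hi = 1e-8, eta_max
    for _ in range(iters):
        eta = 0.5 * (lo + hi)
        m_new, S_new = solve(eta)
        if kl_gaussians(m_new, S_new, m_old, S_old) > epsilon:
            lo = eta   # step too large: increase the temperature
        else:
            hi = eta   # KL bound satisfied: try a less conservative step
    return solve(hi)   # hi always satisfies the KL constraint

# Toy usage: 2-D action, arbitrary quadratic Q model (illustrative values).
m_old = np.zeros(2)
S_old = np.eye(2)
A = np.array([[2.0, 0.3], [0.3, 1.0]])   # curvature of Q in the action
b = np.array([1.0, -0.5])                # linear term of Q
m_new, S_new = kl_constrained_update(m_old, S_old, A, b, epsilon=0.1)
print("new mean:", m_new)
print("KL to old policy:", kl_gaussians(m_new, S_new, m_old, S_old))
```

Solving for the temperature directly on the exact Gaussian KL, rather than on a linearized or approximated objective, is what makes the constraint satisfaction exact regardless of the underlying system dynamics, which is the property the abstract emphasizes.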