
Neural Lyapunov and Optimal Control

Authors :
Layeghi, Daniel
Tonneau, Steve
Mistry, Michael
Publication Year :
2023

Abstract

Despite impressive results, reinforcement learning (RL) suffers from slow convergence and requires a large variety of tuning strategies. In this paper, we investigate the ability of RL algorithms to solve simple continuous control tasks. We show that, without reward and environment tuning, RL suffers from poor convergence. In response, we introduce an optimal control (OC) theoretic, learning-based method that solves the same problems robustly with simple, parsimonious costs. We use the Hamilton-Jacobi-Bellman (HJB) equation and first-order gradients to learn optimal time-varying value functions and, therefore, policies. We show that a relaxation of our objective yields time-varying Lyapunov functions, further verifying our approach by providing guarantees over a compact set of initial conditions. We compare our method to Soft Actor Critic (SAC) and Proximal Policy Optimisation (PPO). In this comparison, we solve all tasks, never underperform in task cost, and, at the point of our convergence, outperform SAC and PPO in the best case by 4 and 2 orders of magnitude.
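To make the abstract's central idea concrete, the sketch below illustrates one generic way to learn a time-varying value function by penalising the HJB residual with first-order (automatic) gradients. This is not the authors' implementation: the pendulum-like dynamics f0, input matrix B, costs q_cost and R, and the MLP value_fn are all illustrative assumptions for a control-affine system with quadratic control cost, where the minimisation over u inside the HJB equation has a closed form.

```python
import jax
import jax.numpy as jnp

# Hypothetical toy problem: control-affine dynamics dx/dt = f0(x) + B @ u
# with cost q(x) + 0.5 u^T R u. All names here are illustrative assumptions.

def f0(x):
    """Drift of a pendulum-like system: x = [theta, theta_dot]."""
    return jnp.array([x[1], jnp.sin(x[0])])

B = jnp.array([[0.0], [1.0]])   # control input matrix
R = jnp.array([[0.1]])          # control-effort weight

def q_cost(x):
    """Quadratic state cost."""
    return jnp.dot(x, x)

def value_fn(params, t, x):
    """Tiny MLP V(t, x); time enters as an extra input feature.
    params: list of (W, b) pairs, with the last layer mapping to a scalar."""
    h = jnp.concatenate([jnp.atleast_1d(t), x])
    for W, b in params[:-1]:
        h = jnp.tanh(W @ h + b)
    W, b = params[-1]
    return (W @ h + b)[0]

def hjb_residual(params, t, x):
    """Residual of -V_t = q(x) + V_x . f0 - 0.5 V_x^T B R^{-1} B^T V_x,
    i.e. the finite-horizon HJB after minimising over u in closed form."""
    V_t = jax.grad(value_fn, argnums=1)(params, t, x)
    V_x = jax.grad(value_fn, argnums=2)(params, t, x)
    u_star = -jnp.linalg.solve(R, B.T @ V_x)          # optimal control
    hamiltonian = (q_cost(x) + 0.5 * u_star @ R @ u_star
                   + V_x @ (f0(x) + B @ u_star))
    return V_t + hamiltonian                          # zero when V solves HJB

def loss(params, ts, xs):
    """Mean squared HJB residual over sampled (t, x) collocation points;
    a terminal-condition penalty V(T, x) = phi(x) would be added in practice."""
    res = jax.vmap(lambda t, x: hjb_residual(params, t, x))(ts, xs)
    return jnp.mean(res ** 2)
```

Minimising this residual over sampled states and times is one standard way to fit V; once V is learned, the closed-form u_star recovers the policy, which is the general pattern the abstract describes.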

Subjects :
Computer Science - Robotics

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2305.15244
Document Type :
Working Paper