1. Robust off-policy Reinforcement Learning via Soft Constrained Adversary
- Author
- Nakanishi, Kosuke; Kubo, Akihiro; Yasui, Yuji; Ishii, Shin
- Subjects
- Computer Science - Machine Learning; Computer Science - Artificial Intelligence
- Abstract
- Recently, robust reinforcement learning (RL) methods that defend against attacks on input observations have garnered significant attention and evolved rapidly owing to RL's potential vulnerability. Although these advanced methods have achieved reasonable success, two limitations remain when the adversary is considered over long-term horizons. First, the mutual dependency between the policy and its corresponding optimal adversary limits the development of off-policy RL algorithms: because the optimal adversary depends on the current policy, applying these methods to off-policy RL has been difficult. Second, these methods generally assume perturbations bounded only by an $L_p$-norm, even when prior knowledge of the perturbation distribution in the environment is available. Here we introduce another perspective on adversarial RL: an $f$-divergence constrained problem with a prior-knowledge distribution. From this formulation we derive two typical attacks and their corresponding robust learning frameworks. Evaluations of robustness demonstrate that the proposed methods achieve excellent performance in sample-efficient off-policy RL. (A hedged sketch of one possible form of the constrained attack problem follows this entry.)
- Comment
- 33 pages, 12 figures, 2 tables
- Published
- 2024
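
The abstract frames the adversary as an $f$-divergence constrained problem with a prior-knowledge distribution. As a minimal sketch of what such a formulation might look like, the LaTeX below writes one plausible hard-constrained attack together with a soft (Lagrangian-relaxed) counterpart suggested by the title's "Soft Constrained Adversary"; all notation ($q$, $p$, $V^{\pi}$, $\epsilon$, $\beta$) is assumed for illustration and is not taken from the paper.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Hypothetical sketch (assumed notation, not the paper's own formulation).
% The adversary chooses a perturbed-observation distribution q to minimize
% the agent's value V^pi while staying close, in f-divergence, to a prior
% perturbation distribution p known for the environment:
\begin{equation*}
  \min_{q}\;
  \mathbb{E}_{\tilde{s} \sim q(\cdot \mid s)}\bigl[ V^{\pi}(\tilde{s}) \bigr]
  \quad \text{subject to} \quad
  D_f\bigl( q(\cdot \mid s) \,\big\|\, p(\cdot \mid s) \bigr) \le \epsilon .
\end{equation*}
% A soft-constrained (Lagrangian) relaxation with coefficient beta > 0
% penalizes deviation from the prior instead of enforcing the budget exactly:
\begin{equation*}
  \min_{q}\;
  \mathbb{E}_{\tilde{s} \sim q(\cdot \mid s)}\bigl[ V^{\pi}(\tilde{s}) \bigr]
  + \beta \, D_f\bigl( q(\cdot \mid s) \,\big\|\, p(\cdot \mid s) \bigr).
\end{equation*}
% Here $D_f(q \,\|\, p) = \mathbb{E}_{x \sim p}[\, f(q(x)/p(x)) \,]$ for a
% convex $f$ with $f(1) = 0$ (e.g., $f(t) = t \log t$ gives the KL divergence).
\end{document}
```

The soft form is typically easier to optimize than the hard-constrained one and encodes the prior perturbation distribution as a penalty rather than a strict budget, which is one natural reading of the "soft constrained" framing in the title.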