
Hierarchical Deep Reinforcement Learning for Continuous Action Control.

Authors :
Yang, Zhaoyang
Merrick, Kathryn
Jin, Lianwen
Abbass, Hussein A.
Source :
IEEE Transactions on Neural Networks & Learning Systems; Nov2018, Vol. 29 Issue 11, p5174-5184, 11p
Publication Year :
2018

Abstract

Robotic control in a continuous action space has long been a challenging topic. This is especially true when controlling robots to solve compound tasks, as both basic skills and compound skills need to be learned. In this paper, we propose a hierarchical deep reinforcement learning algorithm to learn basic skills and compound skills simultaneously. In the proposed algorithm, compound skills and basic skills are learned by two levels of hierarchy. In the first level of hierarchy, each basic skill is handled by its own actor, overseen by a shared basic critic. Then, in the second level of hierarchy, compound skills are learned by a meta critic through the reuse of basic skills. The proposed algorithm was evaluated on a Pioneer 3AT robot in three different navigation scenarios with fully observable tasks. The simulations were built in Gazebo 2 in a Robot Operating System (ROS) Indigo environment. The results show that the proposed algorithm can learn both high-performance basic skills and compound skills through the same learning process. The compound skills learned outperform those learned by a discrete-action-space deep reinforcement learning algorithm. [ABSTRACT FROM AUTHOR]
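The abstract describes a two-level architecture: per-skill actors evaluated by one shared basic critic, and a meta critic that composes compound behaviour by reusing those basic skills. The sketch below illustrates only that control flow, not the paper's training procedure; all class names, the linear policies, and the skill-selection rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class BasicActor:
    """One policy per basic skill (hypothetical stand-in for the
    paper's per-skill actor networks)."""
    def __init__(self, state_dim, action_dim):
        self.W = rng.normal(scale=0.1, size=(action_dim, state_dim))

    def act(self, state):
        # Continuous action: bounded linear map of the state (assumed form).
        return np.tanh(self.W @ state)

class SharedBasicCritic:
    """First hierarchy level: a single Q-function shared by all basic
    actors (linear features are an assumption for illustration)."""
    def __init__(self, state_dim, action_dim):
        self.w = rng.normal(scale=0.1, size=state_dim + action_dim)

    def q_value(self, state, action):
        return float(self.w @ np.concatenate([state, action]))

class MetaCritic:
    """Second hierarchy level: scores basic skills in the current state
    and picks one, so compound behaviour reuses the basic actors."""
    def __init__(self, state_dim, n_skills):
        self.W = rng.normal(scale=0.1, size=(n_skills, state_dim))

    def select_skill(self, state):
        return int(np.argmax(self.W @ state))

# Wire the two levels together for one decision step.
state_dim, action_dim, n_skills = 4, 2, 3
actors = [BasicActor(state_dim, action_dim) for _ in range(n_skills)]
critic = SharedBasicCritic(state_dim, action_dim)
meta = MetaCritic(state_dim, n_skills)

state = rng.normal(size=state_dim)
skill = meta.select_skill(state)    # meta critic chooses a basic skill
action = actors[skill].act(state)   # that skill's actor emits a continuous action
q = critic.q_value(state, action)   # shared critic evaluates the (state, action) pair
```

In this reading, the shared critic lets all basic actors be trained against one value estimate, while the meta critic only has to rank already-learned skills rather than learn continuous actions from scratch.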

Subjects

Subjects :
REINFORCEMENT learning
ALGORITHMS

Details

Language :
English
ISSN :
2162-237X
Volume :
29
Issue :
11
Database :
Complementary Index
Journal :
IEEE Transactions on Neural Networks & Learning Systems
Publication Type :
Periodical
Accession number :
132477971
Full Text :
https://doi.org/10.1109/TNNLS.2018.2805379