Hybrid MDP based integrated hierarchical Q-learning
- Source :
- SCIENCE CHINA Information Sciences; November 2011, Vol. 54, Issue 11, pp. 2279-2294, 16p
- Publication Year :
- 2011
Abstract
- As a widely used reinforcement learning method, Q-learning is bedeviled by the curse of dimensionality: its computational complexity grows dramatically with the size of the state-action space. To combat this difficulty, an integrated hierarchical Q-learning framework is proposed based on a hybrid Markov decision process (MDP) using temporal abstraction instead of the simple MDP. The learning process is naturally organized into multiple levels of learning, e.g., a quantitative (lower) level and a qualitative (upper) level, modeled as an MDP and a semi-MDP (SMDP), respectively. This hierarchical control architecture constitutes a hybrid MDP as the model of hierarchical Q-learning, bridging the two levels of learning. The proposed hierarchical Q-learning scales well and accelerates learning through the upper-level learning process, making the approach an effective integrated learning and control scheme for complex problems. Several experiments are carried out on a puzzle problem in a gridworld environment and a navigation control problem for a mobile robot. The experimental results demonstrate the effectiveness and efficiency of the proposed approach.
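- To illustrate the two-level structure the abstract describes, the following is a minimal hierarchical Q-learning sketch in Python: an upper level that selects subgoal options and is updated by SMDP Q-learning (discounting by gamma**k over an option's k-step duration), and a lower level that learns primitive actions within each option by ordinary MDP Q-learning. The gridworld layout, subgoal set, reward values, and option-termination rule are illustrative assumptions, not the paper's implementation.

```python
# Illustrative two-level hierarchical Q-learning on a small gridworld.
# Everything below (grid size, subgoals, rewards) is an assumption for
# demonstration; it is not the authors' experimental setup.
import random

SIZE = 5
GOAL = (4, 4)
SUBGOALS = [(0, 4), (4, 0), GOAL]             # options: "reach this subgoal"
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # primitive moves
GAMMA, ALPHA, EPS = 0.95, 0.1, 0.1

q_hi = {}  # upper level (SMDP): q_hi[state][subgoal index]
q_lo = {}  # lower level (MDP):  q_lo[(subgoal, state)][action index]

def step(s, a):
    """Deterministic move, clipped to the grid."""
    x, y = s[0] + a[0], s[1] + a[1]
    return (min(max(x, 0), SIZE - 1), min(max(y, 0), SIZE - 1))

def eps_greedy(q, key, n):
    """Epsilon-greedy choice over n entries, creating them on first visit."""
    vals = q.setdefault(key, [0.0] * n)
    if random.random() < EPS:
        return random.randrange(n)
    return max(range(n), key=vals.__getitem__)

for episode in range(500):
    s = (0, 0)
    while s != GOAL:
        g = eps_greedy(q_hi, s, len(SUBGOALS))  # upper level picks an option
        sub, s0, r_sum, k = SUBGOALS[g], s, 0.0, 0
        # Run the option until its subgoal, the goal, or a step cap.
        while s != sub and s != GOAL and k < 20:
            a = eps_greedy(q_lo, (sub, s), len(ACTIONS))
            s2 = step(s, ACTIONS[a])
            r = 1.0 if s2 == GOAL else -0.01      # external reward
            r_in = 1.0 if s2 == sub else -0.01    # intrinsic reward for option
            best = max(q_lo.setdefault((sub, s2), [0.0] * len(ACTIONS)))
            q_lo[(sub, s)][a] += ALPHA * (r_in + GAMMA * best
                                          - q_lo[(sub, s)][a])
            r_sum += (GAMMA ** k) * r             # accumulate discounted reward
            s, k = s2, k + 1
        # SMDP update: bootstrap with gamma**k over the option's duration.
        best_hi = max(q_hi.setdefault(s, [0.0] * len(SUBGOALS)))
        q_hi[s0][g] += ALPHA * (r_sum + (GAMMA ** k) * best_hi - q_hi[s0][g])
```

- The upper level's update treats the whole option execution as a single SMDP transition, which is what lets the hierarchy shrink the effective decision space relative to flat Q-learning over primitive actions.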
Details
- Language :
- English
- ISSN :
- 1674-733X and 1869-1919
- Volume :
- 54
- Issue :
- 11
- Database :
- Supplemental Index
- Journal :
- SCIENCE CHINA Information Sciences
- Publication Type :
- Periodical
- Accession number :
- ejs25676037
- Full Text :
- https://doi.org/10.1007/s11432-011-4332-6