A hierarchical approach to efficient reinforcement learning in deterministic domains
- Authors
Michael L. Littman, Alexander L. Strehl, and Carlos Diuk
- Subjects
Polynomial, Hierarchy, Theoretical computer science, Learning classifier system, Computer science, Sample complexity, Unsupervised learning, Reinforcement learning, State (computer science), Artificial intelligence
- Abstract
Factored representations, model-based learning, and hierarchies are well-studied techniques for improving the learning efficiency of reinforcement-learning algorithms in large-scale state spaces. We bring these three ideas together in a new algorithm that tackles two open problems from the reinforcement-learning literature and solves them in deterministic domains. First, we show how models can improve learning speed in the hierarchy-based MaxQ framework without disrupting opportunities for state abstraction. Second, we show how hierarchies can augment existing factored exploration algorithms to achieve not only low sample complexity for learning but provably efficient planning as well. We illustrate the resulting performance gains in example domains, and we prove polynomial bounds on the computational effort needed to attain near-optimal performance within the hierarchy.
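The abstract's key premise is that determinism makes model-based learning especially cheap: one observation of a state-action pair fully determines its model entry, and planning on the partially learned model can then drive exploration toward untried pairs. The following is a minimal illustrative sketch of that idea only, not the paper's algorithm; the class name, the toy chain environment, and the BFS-based exploration planner are all assumptions introduced here for illustration.

```python
from collections import deque

class DeterministicModelLearner:
    """Illustrative sketch (not the paper's algorithm): model-based
    learning in a deterministic domain. Each (state, action) pair needs
    to be tried at most once, since its outcome never changes."""

    def __init__(self, actions):
        self.actions = actions
        # Learned model: (state, action) -> (next_state, reward).
        self.model = {}

    def observe(self, state, action, next_state, reward):
        # One sample suffices under deterministic dynamics.
        self.model[(state, action)] = (next_state, reward)

    def plan_to_unknown(self, start):
        """Breadth-first search on the learned model for a shortest
        action sequence ending in an untried (state, action) pair --
        a simple form of optimism-driven exploration."""
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:
            state, path = frontier.popleft()
            for a in self.actions:
                if (state, a) not in self.model:
                    return path + [a]  # plan reaches an unknown pair
                nxt, _ = self.model[(state, a)]
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [a]))
        return None  # every reachable (state, action) pair is known
```

Repeatedly executing the returned plans in the environment learns the complete model of all reachable state-action pairs, after which `plan_to_unknown` returns `None` and ordinary planning on the learned model can take over.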
- Published
2006