
Learning relational options for inductive transfer in relational reinforcement learning

Authors :
Maurice Bruynooghe
Kurt Driessens
Tom Croonenborghs
Hendrik Blockeel
Jan Ramon
Jude Shavlik
Prasad Tadepalli
Source :
Inductive Logic Programming ISBN: 9783540784685, ILP
Publication Year :
2008
Publisher :
Springer, 2008.

Abstract

In reinforcement learning problems, an agent has the task of learning a good or optimal strategy from interaction with its environment. At the start of the learning task, the agent usually has very little information. Therefore, when faced with complex problems that have a large state space, learning a good strategy might be infeasible or too slow to work in practice. One way to overcome this problem is to supply the agent with guidance in the form of traces of “reasonable policies”. In many cases, however, it will be hard for the user to supply such a policy. In this paper, we investigate the use of transfer learning in Relational Reinforcement Learning. The goal of transfer learning is to accelerate learning on a target task after training on a different, but related, source task. More specifically, we introduce an extension of the options framework to the relational setting and show how one can learn skills that can be transferred across similar, but different, domains. We present experiments showing the possible benefits of using relational options for transfer learning.

Part of: Lecture Notes in Computer Science, vol. 4894, pp. 88–97. The 17th International Conference on Inductive Logic Programming (ILP), Corvallis, Oregon, 19–21 June 2007. Status: published.
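The options framework the abstract extends represents a skill as a triple of initiation set, intra-option policy, and termination condition. A minimal sketch of how such an option might look in the relational setting is given below; it is an illustration only, not the paper's implementation, and the blocks-world predicates, the `RelationalOption` class, and the `make_clear_option` skill are all hypothetical names chosen for the example. The key point it shows is that each component is defined over relational facts and parameterised by a variable, so the skill is not tied to one enumerated state space.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

# A relational state is a set of ground facts, e.g. ("on", "a", "b").
Fact = Tuple[str, ...]
State = frozenset

@dataclass
class RelationalOption:
    # An option <I, pi, beta> in the sense of the options framework, with all
    # three components defined over relational predicates rather than
    # enumerated states, so the same skill applies in worlds with a different
    # number of objects.
    can_start: Callable[[State], bool]         # initiation set I
    policy: Callable[[State], Optional[Fact]]  # intra-option policy pi
    should_stop: Callable[[State], bool]       # termination condition beta

def make_clear_option(block: str) -> RelationalOption:
    # Hypothetical blocks-world skill: keep unstacking until `block` is clear.
    def blocked(s: State) -> bool:
        return any(f[0] == "on" and f[2] == block for f in s)

    def pick_action(s: State) -> Optional[Fact]:
        # Move whatever sits directly on `block` to the table.
        top = next((f[1] for f in s if f[0] == "on" and f[2] == block), None)
        return ("move_to_table", top) if top is not None else None

    return RelationalOption(can_start=blocked,
                            policy=pick_action,
                            should_stop=lambda s: not blocked(s))

opt = make_clear_option("b")
state = State({("on", "a", "b"), ("on", "b", "c"), ("clear", "a")})
```

Because the option is parameterised by a logical variable (`block`) instead of concrete state identifiers, the learned skill can, in principle, be reused on a related target task with more or differently named objects, which is the kind of transfer the paper studies.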

Details

Language :
English
ISBN :
978-3-540-78468-5
ISBNs :
9783540784685
Database :
OpenAIRE
Journal :
Inductive Logic Programming ISBN: 9783540784685, ILP
Accession number :
edsair.doi.dedup.....f429843ffa1f25d756d9b0c1d2cd4cc7