Applications of Reinforcement Learning to Structured Prediction.

Authors :
Maes, Francis
Denoyer, Ludovic
Gallinari, Patrick
Source :
Recent Advances in Reinforcement Learning (9783540897217); 2008, p205-219, 15p
Publication Year :
2008

Abstract

Supervised learning is about learning functions given a set of input examples and their corresponding outputs. A recent trend in this field is to consider structured outputs such as sequences, trees or graphs. When predicting such structured data, learning models have to select solutions within very large discrete spaces. The combinatorial nature of this problem has recently led to learning models that integrate a search component. In this paper, we show that Structured Prediction (SP) can be seen as a sequential decision problem. We introduce SP-MDP, a Markov Decision Process based formulation of Structured Prediction. Learning the optimal policy in SP-MDP is shown to be equivalent to solving the SP problem. This allows us to apply classical Reinforcement Learning (RL) algorithms to SP. We present experiments on two tasks. The first, sequence labeling, has been extensively studied and allows us to compare the RL approach with traditional SP methods. The second, tree transformation, is a challenging SP task with numerous large-scale real-world applications. We show successful results with general RL algorithms on this task, on which traditional SP models fail.
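As a rough illustration of the SP-MDP framing for sequence labeling, the sketch below treats the state as an input sentence plus a partial labeling, actions as candidate labels for the next position, and the terminal reward as the negative Hamming loss. The label set, feature map, learning rate, and the backward bootstrapped update are illustrative assumptions for this toy example, not the chapter's actual algorithm or experimental setup.

```python
# Minimal sketch (not the authors' implementation) of structured prediction
# as a sequential decision problem: build the output label by label and
# learn a linear Q-function from the terminal reward.
import random
from collections import defaultdict

LABELS = ["B", "I", "O"]            # toy label set (assumption)
ALPHA, GAMMA, EPSILON = 0.1, 1.0, 0.1

def features(x, partial, action):
    """Indicator features over the current token, the previous predicted
    label, and the candidate action (an illustrative choice)."""
    pos = len(partial)
    prev = partial[-1] if partial else "<s>"
    return [("tok", x[pos], action), ("prev", prev, action)]

def q_value(weights, x, partial, action):
    return sum(weights[f] for f in features(x, partial, action))

def choose(weights, x, partial):
    # Epsilon-greedy action selection over the label set.
    if random.random() < EPSILON:
        return random.choice(LABELS)
    return max(LABELS, key=lambda a: q_value(weights, x, partial, a))

def episode(weights, x, y):
    """Run one SP-MDP episode: label the sequence left to right, then
    propagate the terminal reward backward with a bootstrapped update
    (a simplified stand-in for a standard RL algorithm)."""
    partial, trajectory = [], []
    while len(partial) < len(x):
        a = choose(weights, x, partial)
        trajectory.append((list(partial), a))
        partial.append(a)
    reward = -sum(p != t for p, t in zip(partial, y))   # negative Hamming loss
    target = reward
    for state, a in reversed(trajectory):
        q = q_value(weights, x, state, a)
        for f in features(x, state, a):
            weights[f] += ALPHA * (target - q)
        target = GAMMA * q_value(weights, x, state, a)
    return partial

if __name__ == "__main__":
    # Made-up toy training pairs, purely for demonstration.
    data = [(["the", "cat", "sat"], ["O", "B", "O"]),
            (["a", "dog", "ran"], ["O", "B", "O"])]
    w = defaultdict(float)
    for _ in range(200):
        x, y = random.choice(data)
        episode(w, x, y)
    print(episode(w, *data[0]))
```

The point of the sketch is only the decomposition: each labeling decision is one MDP step, so any general-purpose RL method that learns a policy over these states can, in principle, be used as the inference and learning procedure for the structured predictor.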

Details

Language :
English
ISBNs :
9783540897217
Database :
Complementary Index
Journal :
Recent Advances in Reinforcement Learning (9783540897217)
Publication Type :
Book
Accession number :
76727941
Full Text :
https://doi.org/10.1007/978-3-540-89722-4_16