
Gradient-based boosting for statistical relational learning: The relational dependency network case

Authors :
Sriraam Natarajan
Tushar Khot
Kristian Kersting
Bernd Gutmann
Jude W. Shavlik
Source :
Machine Learning, 86(1):25-56
Publication Year :
2011
Publisher :
Springer Science and Business Media LLC, 2011.

Abstract

Dependency networks approximate a joint probability distribution over multiple random variables as a product of conditional distributions. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains. This higher expressivity, however, comes at the expense of a more complex model-selection problem: an unbounded number of relational abstraction levels might need to be explored. Whereas current learning approaches for RDNs learn a single probability tree per random variable, we propose to turn the problem into a series of relational function-approximation problems using gradient-based boosting. In doing so, one can easily induce highly complex features over several iterations and in turn quickly estimate a very expressive model. Our experimental results on several different data sets show that this boosting method results in efficient learning of RDNs when compared to state-of-the-art statistical relational learning approaches.
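
The sketch below is a minimal propositional illustration of the functional-gradient boosting loop the abstract describes: each conditional distribution is estimated by repeatedly fitting a regression tree to the pointwise gradient of the log-likelihood and adding it to the current function. It assumes a sigmoid link, synthetic toy data, and scikit-learn regression trees; the paper's actual method fits relational regression trees over first-order features, which this simplification does not attempt to reproduce.

    # Minimal propositional sketch of functional-gradient boosting for one
    # conditional distribution P(y = 1 | x), assuming a sigmoid link.
    # The paper's method instead fits *relational* regression trees; this
    # only illustrates the gradient/boosting loop itself.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def fit_boosted_conditional(X, y, n_iters=20, max_depth=3):
        """Fit psi(x) = sum_m Delta_m(x) so that P(y=1|x) = sigmoid(psi(x))."""
        trees = []
        psi = np.zeros(len(y), dtype=float)        # current function values
        for _ in range(n_iters):
            prob = 1.0 / (1.0 + np.exp(-psi))      # current estimate of P(y=1|x)
            gradient = y - prob                    # pointwise gradient of the log-likelihood
            tree = DecisionTreeRegressor(max_depth=max_depth)
            tree.fit(X, gradient)                  # regression tree approximates the gradient
            trees.append(tree)
            psi += tree.predict(X)                 # take a step in function space
        return trees

    def predict_proba(trees, X):
        psi = sum(t.predict(X) for t in trees)
        return 1.0 / (1.0 + np.exp(-psi))

    if __name__ == "__main__":
        # Hypothetical toy data, not from the paper.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 4))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
        trees = fit_boosted_conditional(X, y)
        print(predict_proba(trees, X[:5]))

In the relational setting, the same loop runs once per predicate, with each boosting iteration inducing a new relational regression tree, which is how the method accumulates increasingly complex features over iterations.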

Details

ISSN :
1573-0565 and 0885-6125
Volume :
86
Database :
OpenAIRE
Journal :
Machine Learning
Accession number :
edsair.doi.dedup.....9d288f9da1c9b05bfaa28e2c52faf3aa
Full Text :
https://doi.org/10.1007/s10994-011-5244-9