Learning Markov Logic Networks via Functional Gradient Boosting
- Source : ICDM
- Publication Year : 2011
- Publisher : IEEE, 2011.
Abstract
- Recent years have seen a surge of interest in Statistical Relational Learning (SRL) models that combine logic with probabilities. One prominent example is Markov Logic Networks (MLNs). While MLNs are indeed highly expressive, this expressiveness comes at a cost: learning MLNs is a hard problem and has therefore attracted much interest in the SRL community. Current methods for learning MLNs follow a two-step approach: first, perform a search through the space of possible clauses, and then learn appropriate weights for these clauses. We propose to take a different approach, namely to learn both the weights and the structure of the MLN simultaneously. Our approach is based on functional gradient boosting, where the problem of learning MLNs is turned into a series of relational functional approximation problems. We use two kinds of representations for the gradients: clause-based and tree-based. Our experimental evaluation on several benchmark data sets demonstrates that our new approach can learn MLNs that are as good as or better than those found with state-of-the-art methods, often in a fraction of the time.
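- The sketch below illustrates the generic functional-gradient-boosting loop the abstract describes: at each iteration, pointwise gradients of the log-likelihood are computed for the ground query atoms and a relational regressor (a clause set or a relational regression tree) is fit to them and added to the model. This is a minimal illustration, not the authors' implementation; the function names, the `fit_relational_regressor` learner, and the data representation are hypothetical placeholders.

```python
# Hedged sketch of boosting for MLN learning. The relational regression
# learner and the example/label representation are assumed placeholders.
import math
from typing import Callable, List

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def boost_mln(examples: List[dict],
              labels: List[int],
              fit_relational_regressor: Callable,
              n_iterations: int = 20):
    """Learn a sum of regressors F = delta_1 + ... + delta_M.

    examples : ground query atoms with their relational context (hypothetical format)
    labels   : 1 if the ground atom is true in the data, 0 otherwise
    fit_relational_regressor : fits a clause- or tree-based regressor to
        (example, gradient) pairs; stands in for the relational learner
    """
    models = []  # each regressor approximates one functional gradient
    for _ in range(n_iterations):
        gradients = []
        for x, y in zip(examples, labels):
            # Current potential F(x) is the sum of all regressors learned so far.
            potential = sum(m(x) for m in models)
            # Pointwise functional gradient of the log-likelihood:
            # observed truth value minus currently predicted probability.
            gradients.append(y - sigmoid(potential))
        # Fit the next regressor to the gradients and add it to the ensemble,
        # which simultaneously extends the structure and adjusts the weights.
        models.append(fit_relational_regressor(examples, gradients))
    return models
```

- Because each boosting step adds new clauses or tree paths together with their weights, structure and weights are learned jointly rather than in the two separate stages used by prior MLN learners.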
- Subjects :
- Structure (mathematical logic)
- Approximation theory
- Markov chain
- Group method of data handling
- Statistical relational learning
- Markov process
- Machine learning
- Ensemble learning
- Artificial intelligence
- Gradient boosting
- Mathematics
Details
- Database : OpenAIRE
- Journal : 2011 IEEE 11th International Conference on Data Mining
- Accession number : edsair.doi...........3cd13a2f5e4582eb5d78c527bf9edb09