
EXPLOITING SYNTACTIC, SEMANTIC, AND LEXICAL REGULARITIES IN LANGUAGE MODELING VIA DIRECTED MARKOV RANDOM FIELDS.

Authors :
Wang, Shaojun
Wang, Shaomin
Cheng, Li
Greiner, Russell
Schuurmans, Dale
Source :
Computational Intelligence. Nov 2013, Vol. 29 Issue 4, p649-679. 31p.
Publication Year :
2013

Abstract

We present a directed Markov random field (MRF) model that combines n-gram models, probabilistic context-free grammars (PCFGs), and probabilistic latent semantic analysis (PLSA) for the purpose of statistical language modeling. Even though the composite directed MRF model potentially has an exponential number of loops and becomes a context-sensitive grammar, we are nevertheless able to estimate its parameters in cubic time using an efficient modified Expectation-Maximization (EM) method, the generalized inside-outside algorithm, which extends the inside-outside algorithm to incorporate the effects of the n-gram and PLSA language models. We generalize various smoothing techniques to alleviate the sparseness of n-gram counts in cases where there are hidden variables. We also derive an analogous algorithm to find the most likely parse of a sentence and to calculate the probability of an initial subsequence of a sentence, both generated by the composite language model. Our experimental results on the Wall Street Journal corpus show that we obtain significant reductions in perplexity compared to the state-of-the-art baseline trigram model with Good-Turing and Kneser-Ney smoothing techniques. [ABSTRACT FROM AUTHOR]
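
For readers outside the parsing literature, a brief illustration of the cubic-time machinery the abstract refers to may help. The Python sketch below implements only the standard inside algorithm for a PCFG in Chomsky normal form, the O(n^3) dynamic program that the paper's generalized inside-outside algorithm extends to fold in the n-gram and PLSA components; the toy grammar, lexicon, and sentence are hypothetical stand-ins, not drawn from the paper.

```python
# Minimal sketch of the standard inside algorithm for a PCFG in
# Chomsky normal form. This is the O(n^3) building block that the
# paper's generalized inside-outside algorithm extends; the grammar
# and sentence below are hypothetical, not taken from the paper.
from collections import defaultdict

# Hypothetical toy grammar: binary rules (A -> B C) and lexical
# rules (A -> word), each with a probability.
binary_rules = {
    ("S", ("NP", "VP")): 1.0,
    ("VP", ("V", "NP")): 1.0,
}
lexical_rules = {
    ("NP", "traders"): 0.5,
    ("NP", "stocks"): 0.5,
    ("V", "sold"): 1.0,
}

def inside_probability(words, start="S"):
    """Return P(words | grammar) by dynamic programming over spans."""
    n = len(words)
    # beta[(i, j)][A] = inside probability of nonterminal A spanning
    # words[i:j] (half-open interval).
    beta = defaultdict(lambda: defaultdict(float))
    # Width-1 spans: apply lexical rules.
    for i, w in enumerate(words):
        for (a, word), p in lexical_rules.items():
            if word == w:
                beta[(i, i + 1)][a] += p
    # Wider spans: combine two adjacent sub-spans via a binary rule.
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):  # split point
                for (a, (b, c)), p in binary_rules.items():
                    beta[(i, j)][a] += p * beta[(i, k)][b] * beta[(k, j)][c]
    return beta[(0, n)][start]

print(inside_probability(["traders", "sold", "stocks"]))  # 0.25
```

The three nested loops over span width, start position, and split point are what make the computation cubic in sentence length, which is the complexity the abstract cites for parameter estimation in the composite model.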

Details

Language :
English
ISSN :
0824-7935
Volume :
29
Issue :
4
Database :
Academic Search Index
Journal :
Computational Intelligence
Publication Type :
Academic Journal
Accession Number :
91936137
Full Text :
https://doi.org/10.1111/j.1467-8640.2012.00436.x