
Effective Estimation of Deep Generative Language Models

Authors :
Tom Pelsmaeker
Wilker Aziz
Source :
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL
Publisher :
Association for Computational Linguistics

Abstract

Advances in variational inference enable parameterisation of probabilistic models by deep neural networks. This combines the statistical transparency of the probabilistic modelling framework with the representational power of deep learning. Yet, due to a problem known as posterior collapse, it is difficult to estimate such models effectively in the context of language modelling. We concentrate on one such model, the variational auto-encoder, which we argue is an important building block in hierarchical probabilistic models of language. This paper contributes a sober view of the problem, a survey of techniques to address it, novel techniques, and extensions to the model. To establish a ranking of techniques, we perform a systematic comparison using Bayesian optimisation and find that many techniques perform reasonably similarly, given enough resources. Still, a favourite can be named based on convenience. We also make several empirical observations and recommendations of best practices that should help researchers interested in this exciting field.

Comment: Published in ACL 2020
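To make the setting concrete, below is a minimal sketch of the kind of model the abstract refers to: a sentence variational auto-encoder trained with an ELBO objective, here paired with the "free bits" heuristic as one example of a technique against posterior collapse. This is not the authors' implementation; the architecture, hyperparameters, and the free-bits target are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code) of a sentence VAE with a
# free-bits ELBO, one common mitigation for posterior collapse.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SentenceVAE(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, z_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.to_mu = nn.Linear(hid_dim, z_dim)
        self.to_logvar = nn.Linear(hid_dim, z_dim)
        self.z_to_h = nn.Linear(z_dim, hid_dim)
        self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, x):
        # x: (batch, seq_len) token ids
        e = self.embed(x)
        _, (h, _) = self.encoder(e)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        # Reparameterisation trick: z = mu + sigma * eps
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        # Condition the decoder on z through its initial hidden state.
        h0 = torch.tanh(self.z_to_h(z)).unsqueeze(0)
        c0 = torch.zeros_like(h0)
        dec_out, _ = self.decoder(e, (h0, c0))
        return self.out(dec_out), mu, logvar


def elbo_free_bits(logits, targets, mu, logvar, free_bits=0.5, pad_id=0):
    # Reconstruction term: token-level cross-entropy, averaged over the batch.
    nll = F.cross_entropy(
        logits.transpose(1, 2), targets, ignore_index=pad_id, reduction="sum"
    ) / targets.size(0)
    # KL(q(z|x) || N(0, I)), computed per latent dimension.
    kl_per_dim = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean(0)
    # Free bits: dimensions below the target rate are not penalised, which
    # keeps the approximate posterior from collapsing onto the prior.
    kl = torch.clamp(kl_per_dim, min=free_bits).sum()
    return nll + kl
```

Other strategies surveyed in this line of work (e.g. KL annealing or modified objectives) would slot into the same training loop by changing only the objective function.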

Details

Language :
English
Database :
OpenAIRE
Journal :
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL
Accession number :
edsair.doi.dedup.....05c4e4ac789bcc607f171faa0555cf6e
Full Text :
https://doi.org/10.18653/v1/2020.acl-main.646