Towards Neural Machine Translation with Latent Tree Attention
- Publication Year: 2017
Abstract
- Building models that take advantage of the hierarchical structure of language without a priori annotation is a longstanding goal in natural language processing. We introduce such a model for the task of machine translation, pairing a recurrent neural network grammar encoder with a novel attentional RNNG decoder and applying policy gradient reinforcement learning to induce unsupervised tree structures on both the source and target. When trained on character-level datasets with no explicit segmentation or parse annotation, the model learns a plausible segmentation and shallow parse, obtaining performance close to an attentional baseline.
- Comment: Presented at SPNLP 2017
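The abstract's central mechanism is an encoder whose tree structure is not annotated but induced: discrete SHIFT/REDUCE parsing decisions are sampled from a learned policy and trained with policy gradient reinforcement learning. Below is a minimal, hypothetical PyTorch sketch of that idea, not the paper's implementation; all class names, dimensions, and the placeholder reward are illustrative assumptions (the actual model uses stack LSTMs, an attentional RNNG decoder, and the translation objective as the reward signal).

```python
# Minimal sketch (assumed names, not the authors' code): a shift-reduce
# encoder whose SHIFT/REDUCE choices are sampled from a learned policy
# and trained with REINFORCE, inducing tree structure without annotation.
import torch
import torch.nn as nn
from torch.distributions import Categorical

class LatentTreeEncoder(nn.Module):
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.compose = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh())
        self.policy = nn.Linear(2 * dim, 2)  # scores for SHIFT vs. REDUCE

    def forward(self, char_ids):
        buffer = list(self.embed(char_ids))  # character embeddings, left to right
        stack, log_probs = [], []
        while buffer or len(stack) > 1:
            # SHIFT needs a nonempty buffer; REDUCE needs >= 2 stack items
            can_shift, can_reduce = bool(buffer), len(stack) >= 2
            top = stack[-1] if stack else torch.zeros_like(self.embed.weight[0])
            nxt = buffer[0] if buffer else torch.zeros_like(top)
            logits = self.policy(torch.cat([top, nxt]))
            mask = torch.tensor([0.0 if can_shift else -1e9,
                                 0.0 if can_reduce else -1e9])
            dist = Categorical(logits=logits + mask)  # illegal actions masked out
            action = dist.sample()
            log_probs.append(dist.log_prob(action))
            if action.item() == 0:           # SHIFT: push next character embedding
                stack.append(buffer.pop(0))
            else:                            # REDUCE: compose top two constituents
                right, left = stack.pop(), stack.pop()
                stack.append(self.compose(torch.cat([left, right])))
        return stack[0], torch.stack(log_probs)

# REINFORCE step: reward the sampled tree by how well its root representation
# serves the downstream objective. The scalar below is a stand-in; in the
# paper the reward comes from the translation model's likelihood.
enc = LatentTreeEncoder(vocab_size=128)
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
root, log_probs = enc(torch.randint(0, 128, (12,)))
reward = -root.pow(2).mean()                 # placeholder reward, hypothetical
loss = -(reward.detach() * log_probs.sum())  # policy gradient on structure choices
opt.zero_grad()
loss.backward()
opt.step()
```

Because the gradient flows only through the log-probabilities of the sampled actions, the structure decisions stay discrete at inference time while still being trainable end to end, which is what lets the model learn segmentation and a shallow parse directly from characters.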
Details
- Database: arXiv
- Publication Type: Report
- Accession number: edsarx.1709.01915
- Document Type: Working Paper