
MeLT: Message-Level Transformer with Masked Document Representations as Pre-Training for Stance Detection

Authors :
Matero, Matthew
Soni, Nikita
Balasubramanian, Niranjan
Schwartz, H. Andrew
Publication Year :
2021

Abstract

Much of natural language processing is focused on leveraging large capacity language models, typically trained over single messages with a task of predicting one or more tokens. However, modeling human language at higher levels of context (i.e., sequences of messages) is under-explored. In stance detection and other social media tasks where the goal is to predict an attribute of a message, we have contextual data that is loosely semantically connected by authorship. Here, we introduce the Message-Level Transformer (MeLT) -- a hierarchical message encoder pre-trained over Twitter and applied to the task of stance prediction. We focus on stance prediction as a task that benefits from knowing the context of the message (i.e., the sequence of previous messages). The model is trained using a variant of masked-language modeling: instead of predicting tokens, it seeks to generate an entire masked (aggregated) message vector via a reconstruction loss. We find that applying this pre-trained masked message-level transformer to the downstream task of stance detection achieves an F1 of 67%.
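The pre-training objective described above (masking whole message vectors and reconstructing them, rather than predicting masked tokens) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the architecture sizes, the learned mask vector, the pooling of token embeddings into message vectors, and the MSE reconstruction loss are all assumptions made for the example.

```python
# Minimal sketch (assumed details, not the paper's code): a message-level
# transformer pre-trained by reconstructing masked message vectors.
# Here msg_vecs stand in for aggregated per-message embeddings
# (e.g., mean-pooled token representations of each tweet).
import torch
import torch.nn as nn

class MessageLevelTransformer(nn.Module):
    def __init__(self, dim=768, heads=8, layers=4, max_msgs=64):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(dim))      # learned [MASK] message vector
        self.pos = nn.Parameter(torch.zeros(max_msgs, dim))   # message-position embeddings
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.recon_head = nn.Linear(dim, dim)                 # predicts the original message vector

    def forward(self, msg_vecs, mask):
        # msg_vecs: (batch, n_msgs, dim) aggregated message embeddings
        # mask:     (batch, n_msgs) bool, True where a message is masked out
        x = torch.where(mask.unsqueeze(-1), self.mask_token, msg_vecs)
        x = x + self.pos[: x.size(1)]
        h = self.encoder(x)
        return self.recon_head(h)

# Toy pre-training step: mask ~15% of messages and reconstruct them.
batch, n_msgs, dim = 2, 16, 768
msg_vecs = torch.randn(batch, n_msgs, dim)
mask = torch.rand(batch, n_msgs) < 0.15
mask[:, 0] = True                                  # ensure at least one masked position
model = MessageLevelTransformer(dim=dim)
pred = model(msg_vecs, mask)
loss = nn.functional.mse_loss(pred[mask], msg_vecs[mask])  # reconstruction loss on masked slots
loss.backward()
print(float(loss))
```

For the downstream stance task, one would replace the reconstruction head with a classifier over the (contextualized) message representations and fine-tune on labeled stance data.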

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2109.08113
Document Type :
Working Paper