
WordTransABSA: Enhancing Aspect-based Sentiment Analysis with masked language modeling for affective token prediction.

Authors :
Jin, Weiqiang
Zhao, Biao
Zhang, Yu
Huang, Jia
Yu, Hang
Source :
Expert Systems with Applications. Mar 2024, Vol. 238, Part F.
Publication Year :
2024

Abstract

In recent years, Aspect-based Sentiment Analysis (ABSA) has been a crucial yet challenging task in recognizing aspect-level emotions in text. ABSA has numerous applications across various fields, such as social media, product reviews, and movie comments, making it an attractive area of research, and many researchers are working to develop more powerful sentiment analysis models. Currently, most existing ABSA models follow the generic fine-tuning paradigm of pre-trained language models (PLMs), which utilizes only the encoder parameters while discarding the decoder parameters of the PLM. However, this approach fails to effectively leverage the prior knowledge encoded in PLMs. To address these issues, we investigate the potential of the original pre-training scheme of PLMs for conducting ABSA and thus propose a novel approach in this paper, namely Target Word Transferred ABSA (WordTransABSA). In WordTransABSA, we propose "Word Transferred LM", a novel sequence-level optimization strategy that transfers target words in a sentence into pivot tokens to better stimulate the semantic understanding capability of the PLM. Given a sentence with aspect terms as input, WordTransABSA generates contextually appropriate semantics and predicts affective tokens at the corresponding positions of the aspect terms. The final sentiment polarity of each aspect term is then determined through several selected sentiment identification strategies. WordTransABSA takes full advantage of the versatile linguistic knowledge of the pre-trained language model, achieving competitive accuracy compared with recent baselines. WordTransABSA demonstrates its superiority and effectiveness through extensive experiments in both data-sufficient (full-data supervised learning) and data-insufficient (few-shot learning) scenarios. We have made our code publicly available on GitHub: https://github.com/albert-jin/WordTransABSA.

• We present WordTransABSA, introducing a "Word Transferred LM" strategy for ABSA.
• Various strategies are explored for searching sentiment-related pivot tokens.
• Data-sufficient and data-scarce experiments validate the effectiveness of WordTransABSA.
• Stimulating the PLM pre-training paradigm is shown to be a better solution for ABSA.

[ABSTRACT FROM AUTHOR]
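For intuition, the following is a minimal sketch of the core idea described in the abstract: replace the aspect term with a mask token, let a pre-trained masked LM predict tokens at that position, and map the prediction onto sentiment polarities via a small pivot-token vocabulary. This is not the authors' implementation (their code is at the GitHub link above); the model choice, the PIVOTS sets, and the predict_polarity helper are illustrative assumptions, and the Word Transferred LM fine-tuning objective itself is omitted.

# NOTE: illustrative sketch only, not the authors' WordTransABSA code
# (see https://github.com/albert-jin/WordTransABSA for that).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Hypothetical pivot tokens: each polarity is scored by the MLM
# probability mass assigned to its pivot words at the masked position.
PIVOTS = {
    "positive": ["good", "great", "excellent"],
    "negative": ["bad", "terrible", "awful"],
    "neutral": ["okay", "fine", "average"],
}

def predict_polarity(sentence, aspect):
    # "Transfer" the aspect term into a mask token so the pre-trained
    # MLM head predicts a replacement word at that position.
    masked = sentence.replace(aspect, tokenizer.mask_token, 1)
    inputs = tokenizer(masked, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    probs = logits.softmax(dim=-1)
    # Aggregate pivot-token probabilities per polarity; pick the best.
    scores = {
        label: sum(probs[tokenizer.convert_tokens_to_ids(t)].item()
                   for t in tokens)
        for label, tokens in PIVOTS.items()
    }
    return max(scores, key=scores.get)

print(predict_polarity("The battery life of this phone is amazing.",
                       "battery life"))

With a frozen off-the-shelf MLM, the pivot probabilities at a masked aspect slot are noisy; the paper's contribution is to fine-tune the PLM with the Word Transferred LM objective so that sentiment-related pivot tokens become likely at aspect positions, a training step this sketch does not reproduce.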

Details

Language :
English
ISSN :
0957-4174
Volume :
238
Database :
Academic Search Index
Journal :
Expert Systems with Applications
Publication Type :
Academic Journal
Accession number :
173694068
Full Text :
https://doi.org/10.1016/j.eswa.2023.122289