Knowledge-Enhanced Prompt Learning for Few-Shot Text Classification.

Authors :
Liu, Jinshuo
Yang, Lu
Source :
Big Data & Cognitive Computing; Apr 2024, Vol. 8, Issue 4, p43, 12p
Publication Year :
2024

Abstract

Classification methods based on fine-tuning pre-trained language models often require a large number of labeled samples; therefore, few-shot text classification has attracted considerable attention. Prompt learning is an effective method for addressing few-shot text classification tasks in low-resource settings. The essence of prompt tuning is to insert tokens into the input, thereby converting a text classification task into a masked language modeling problem. However, constructing appropriate prompt templates and verbalizers remains challenging: manual prompts often require expert knowledge, while automatic prompt construction is time-consuming. In addition, the extensive knowledge contained in entities and relations should not be ignored. To address these issues, we propose structured knowledge prompt tuning (SKPT), a knowledge-enhanced prompt tuning method. Specifically, SKPT comprises three components: a prompt template, a prompt verbalizer, and training strategies. First, we insert virtual tokens into the prompt template based on open triples to introduce external knowledge. Second, we use an improved knowledgeable verbalizer to expand and filter the label words. Finally, we apply structured knowledge constraints during the training phase to optimize the model. Extensive experiments on few-shot text classification tasks under different settings demonstrate the effectiveness of our model.
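
To make the template and verbalizer components concrete, below is a minimal, self-contained Python sketch of the pipeline the abstract describes. All names here (build_template, Verbalizer) and the fixed template wording are illustrative assumptions, not the authors' code: a real run would query a masked language model (e.g., BERT) for the [MASK]-position probabilities and would learn the virtual-token embeddings rather than treat them as literal strings.

```python
# Hypothetical sketch of an SKPT-style prompt pipeline (not the paper's code).
from typing import Dict, List, Tuple

def build_template(text: str, triple: Tuple[str, str, str], n_virtual: int = 2) -> str:
    """Wrap the input with an open triple and virtual tokens, plus a [MASK] slot.

    The triple (head, relation, tail) injects external knowledge;
    [V1], [V2], ... stand in for trainable virtual-token embeddings.
    """
    virtual = " ".join(f"[V{i + 1}]" for i in range(n_virtual))
    head, rel, tail = triple
    return f"{virtual} {head} {rel} {tail} . {text} It was about [MASK] ."

class Verbalizer:
    """Map each class to an expanded set of label words."""

    def __init__(self, label_words: Dict[str, List[str]]):
        self.label_words = label_words

    def score(self, mask_probs: Dict[str, float]) -> Dict[str, float]:
        # Average the MLM probability mass over each class's label words;
        # the paper additionally filters noisy label words, omitted here.
        return {
            cls: sum(mask_probs.get(w, 0.0) for w in words) / len(words)
            for cls, words in self.label_words.items()
        }

# Toy usage with stand-in [MASK] probabilities.
prompt = build_template(
    "The striker scored twice in the final.",
    ("striker", "plays", "football"),
)
verbalizer = Verbalizer({"sports": ["sports", "football", "game"],
                         "politics": ["politics", "election", "policy"]})
mask_probs = {"football": 0.41, "sports": 0.22, "election": 0.03}
print(prompt)
print(max(verbalizer.score(mask_probs).items(), key=lambda kv: kv[1]))
```

The structured knowledge constraint applied during training is not shown, since the abstract does not specify its form; in this sketch it would enter as an additional loss term alongside the masked-language-modeling objective.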

Details

Language :
English
ISSN :
2504-2289
Volume :
8
Issue :
4
Database :
Complementary Index
Journal :
Big Data & Cognitive Computing
Publication Type :
Academic Journal
Accession number :
176878935
Full Text :
https://doi.org/10.3390/bdcc8040043