Large Language Model Augmentation and Feature Alignment Method for Few-Shot Continual Relation Extraction
- Authors
LI Yifei, ZHANG Lingling, DONG Yuxuan, WANG Jiaxin, ZHONG Yujie, WEI Bifan
- Subjects
large language model (LLM), relation extraction, continual learning, few-shot learning, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Relation extraction, a key task in natural language processing, plays a significant role in deepening language understanding, constructing knowledge graphs, and optimizing information retrieval systems. However, traditional supervised learning methods are poorly suited to real-world scenarios, where new relations continually emerge and large annotated datasets are unavailable. Although the advent of large language models has significantly improved performance on many natural language processing tasks, they still cannot effectively address the challenges of few-shot continual relation extraction. To fully leverage the semantic knowledge of large language models and mitigate catastrophic forgetting and overfitting, a novel few-shot continual relation extraction method, LAFA (large language model augmentation and feature alignment), is proposed. This method strengthens representation alignment through several strategies, including relation instance rewriting, semantic expansion, and enhanced relation representation. It effectively improves the model's adaptability to new relations and its retention of old knowledge while keeping data and computational costs low. Experimental validation on two relation extraction datasets, FewRel and TACRED, demonstrates that LAFA outperforms existing methods on few-shot continual relation extraction tasks, achieving the best results in the incremental stages in particular. Ablation experiments further reveal the significant contribution of each module to overall performance. Moreover, the inference efficiency and cost of LAFA are substantially lower than those of existing large language model-based methods, and it scales well, adapting to a variety of language models.
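The task setting the abstract addresses can be sketched as follows: relation types arrive in a sequence of tasks, each contributing only a handful of labelled sentences, and after every task the model must classify against all relations seen so far. This is a minimal illustrative sketch of that setting only; the toy bag-of-words "embedding" and nearest-centroid classifier are placeholders, not the LAFA method itself, and the relation names and example sentences are invented for illustration.

```python
from collections import Counter

def embed(sentence: str) -> Counter:
    """Toy sentence representation: a sparse bag-of-words vector."""
    return Counter(sentence.lower().split())

def centroid(vectors: list[Counter]) -> dict:
    """Average several sparse vectors into one relation prototype."""
    total = Counter()
    for v in vectors:
        total.update(v)
    return {tok: cnt / len(vectors) for tok, cnt in total.items()}

def sqdist(a: dict, b: dict) -> float:
    """Squared Euclidean distance over the union of token keys."""
    keys = set(a) | set(b)
    return sum((a.get(k, 0) - b.get(k, 0)) ** 2 for k in keys)

def nearest(prototypes: dict, vec: Counter) -> str:
    """Classify against ALL relations learned so far."""
    return min(prototypes, key=lambda rel: sqdist(prototypes[rel], vec))

# A stream of tasks: each introduces new relations with only K=2 examples
# (hypothetical relations and sentences, for illustration only).
task_stream = [
    {"founded_by": ["jobs founded apple", "gates founded microsoft"],
     "born_in":    ["einstein born in ulm", "curie born in warsaw"]},
    {"capital_of": ["paris capital of france", "rome capital of italy"]},
]

prototypes = {}  # one prototype per relation, retained across tasks
for task in task_stream:
    for relation, examples in task.items():
        prototypes[relation] = centroid([embed(s) for s in examples])

# After the final task, queries are matched against every relation seen so far.
print(nearest(prototypes, embed("gates founded microsoft")))  # -> founded_by
```

The catastrophic forgetting the abstract targets shows up here as drift: a real model that fine-tunes on each new task would degrade the old prototypes, which is what alignment-based methods aim to prevent.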
- Published
2024