Self-distillation framework for document-level relation extraction in low-resource environments.
- Source :
- PeerJ Computer Science; Mar2024, p1-23, 23p
- Publication Year :
- 2024
-
Abstract
- The objective of document-level relation extraction is to retrieve the relations existing between entities within a document. Currently, deep learning methods have demonstrated superior performance in document-level relation extraction tasks. However, to enhance performance, various methods directly introduce additional modules into the backbone model, which often increases the number of parameters in the overall model. Consequently, deploying these deep models in resource-limited environments presents a challenge. In this article, we introduce a self-distillation framework for document-level relation extraction. We partition the document-level relation extraction model into two distinct modules, namely, the entity embedding representation module and the entity pair embedding representation module. Subsequently, we apply separate distillation techniques to each module to reduce the model's size. To evaluate the proposed framework's performance, two benchmark datasets for document-level relation extraction, namely GDA and DocRED, are used in this study. The results demonstrate that our model effectively enhances performance and significantly reduces the model's size. [ABSTRACT FROM AUTHOR]
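- The abstract does not specify the distillation objective; a common formulation for this kind of module-wise knowledge distillation is a temperature-scaled KL divergence between teacher and student outputs, computed separately for each module and summed. The sketch below is a minimal, hypothetical illustration in NumPy (function names, the temperature `T`, and the per-module weighting are assumptions, not the authors' method):

```python
import numpy as np

def softmax(x, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = x / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on softened distributions,
    scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return (T ** 2) * kl.mean()

def module_wise_loss(entity_s, entity_t, pair_s, pair_t, w_entity=1.0, w_pair=1.0):
    """Separate distillation terms for the entity embedding module
    and the entity-pair embedding module, combined by weights
    (the weighting scheme here is illustrative)."""
    return w_entity * distill_loss(entity_s, entity_t) + \
           w_pair * distill_loss(pair_s, pair_t)
```

- When student and teacher outputs coincide the loss is zero, so minimizing it pushes each compact student module toward its larger teacher counterpart; this is the general mechanism that lets the distilled model shrink while retaining performance.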
- Subjects :
- DEEP learning
- DISTILLATION
Details
- Language :
- English
- ISSN :
- 2376-5992
- Database :
- Complementary Index
- Journal :
- PeerJ Computer Science
- Publication Type :
- Academic Journal
- Accession number :
- 176567911
- Full Text :
- https://doi.org/10.7717/peerj-cs.1930