151. Document-level relation extraction with entity mentions deep attention.
- Author
- Xu, Yangsheng, Tian, Jiaxin, Tang, Mingwei, Tao, Linping, and Wang, Liuxuan
- Subjects
- *AUTOMATIC summarization, *SEMANTICS, *ARTIFICIAL intelligence, *INFORMATION theory, *LANGUAGE & languages
- Abstract
Document-level Relation Extraction (DocRE) aims to extract relations between entities from documents. In contrast to sentence-level relation extraction, it requires extracting semantic relations that span multiple sentences, so DocRE algorithms must handle more complex entity structures and combine semantic information across sentences when reasoning about relations between entities. Existing algorithms fail to infer such relations when faced with complex entity structures. In this paper, we propose an entity mentions deep attention (EMDA) framework that efficiently infers entity relations through entity structure and contextual information. First, a structural dependency module is designed to enable interaction between the different mentions of an entity. Second, a deep contextual attention component is proposed to enrich the semantic information between entities using entity-related contexts. Finally, a distance mapping component addresses entity pairs that are far apart in the document. Experimental results show that our model outperforms state-of-the-art models on three public datasets: DocRED, GDA, and CDR.
• We created dependencies between entities and their various mentions to enable structural reasoning.
• The deep contextual attention module is designed to enhance contextual semantic information and improve model reasoning.
• A location mapping module enhances entity representations to accurately capture relations between distant entities.
• The experimental results demonstrate the superiority of our EMDA model. [ABSTRACT FROM AUTHOR]
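The record gives only the abstract, so the sketch below is purely illustrative: a minimal PyTorch rendering of the three named components (interaction between an entity's mentions, entity-pair-conditioned context attention, and a bucketed distance embedding for distant pairs). Every module name, dimension, and wiring choice here is an assumption, not the paper's actual EMDA implementation.

```python
# Hypothetical sketch of the three components the abstract names; the
# real EMDA architecture is not specified in this record, so shapes,
# pooling choices, and the classifier head are all assumptions.
import torch
import torch.nn as nn


class MentionInteraction(nn.Module):
    """Self-attention over one entity's mention embeddings (structural dependency, assumed)."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, mentions: torch.Tensor) -> torch.Tensor:
        # mentions: (num_mentions, dim) -> one pooled entity vector (dim,)
        x = mentions.unsqueeze(0)          # add a batch dimension
        out, _ = self.attn(x, x, x)        # mentions attend to each other
        return out.mean(dim=1).squeeze(0)  # average into a single vector


class PairScorer(nn.Module):
    """Scores a (head, tail) pair with context attention plus a distance embedding."""

    def __init__(self, dim: int, num_rel: int, num_buckets: int = 10):
        super().__init__()
        self.ctx_query = nn.Linear(2 * dim, dim)        # pair-conditioned query (assumed)
        self.dist_emb = nn.Embedding(num_buckets, dim)  # bucketed head-tail distance
        self.classifier = nn.Linear(3 * dim, num_rel)

    def forward(self, head, tail, tokens, dist_bucket):
        # tokens: (seq_len, dim) contextual token embeddings for the document
        q = self.ctx_query(torch.cat([head, tail], dim=-1))       # (dim,)
        weights = torch.softmax(tokens @ q, dim=0)                # attend over context
        ctx = weights @ tokens                                    # (dim,) context vector
        ctx = ctx + self.dist_emb(dist_bucket)                    # inject distance signal
        return self.classifier(torch.cat([head, tail, ctx], -1)) # relation logits


# Toy usage with random tensors standing in for a real document encoder.
dim, num_rel = 64, 97
pool = MentionInteraction(dim)
scorer = PairScorer(dim, num_rel)
head = pool(torch.randn(3, dim))   # entity with three mentions
tail = pool(torch.randn(2, dim))   # entity with two mentions
logits = scorer(head, tail, torch.randn(128, dim), torch.tensor(4))
print(logits.shape)  # torch.Size([97])
```

In a real pipeline the random tensors would be replaced by token and mention embeddings from a pretrained encoder, and the distance bucket would be computed from the token offset between the head and tail mentions.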
- Published
- 2024