Using Augmented Small Multimodal Models to Guide Large Language Models for Multimodal Relation Extraction.
- Source :
- Applied Sciences (2076-3417); Nov 2023, Vol. 13, Issue 22, p12208, 14p
- Publication Year :
- 2023
Abstract
- Multimodal Relation Extraction (MRE) is a core task for constructing Multimodal Knowledge Graphs (MKGs). Most current research fine-tunes small-scale single-modal image and text pre-trained models, but image-text datasets from network media suffer from data scarcity, simplistic text, and abstract image information, which require substantial external knowledge for supplementation and reasoning. We use Multimodal Relation Data Augmentation (MRDA) to address the data scarcity problem in MRE, and propose a Flexible Threshold Loss (FTL) to handle the imbalanced entity pair distribution and long-tailed classes. After obtaining prompt information from the small model, which serves as a guide, we employ a Large Language Model (LLM) as a knowledge engine to supply common sense and reasoning abilities. Notably, both stages of our framework are flexibly replaceable: the first stage adapts to multimodal-related classification tasks with small models, and the second stage can be replaced by more powerful LLMs. In experiments, our EMRE2llm framework achieves state-of-the-art performance on the challenging MNRE dataset, reaching an 82.95% F1 score on the test set. [ABSTRACT FROM AUTHOR]
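The abstract names a Flexible Threshold Loss for the imbalanced entity-pair distribution and long-tailed classes but does not reproduce its formula. As a non-authoritative illustration only, the PyTorch sketch below implements a learned-threshold loss in the style of adaptive thresholding (as in ATLOP), where a dedicated threshold class separates gold relations from non-relations; the function name, the reserved class index 0, and the exact formulation are assumptions for illustration, not the paper's definition of FTL.

import torch
import torch.nn.functional as F

def flexible_threshold_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Assumed ATLOP-style adaptive-threshold loss, not the paper's exact FTL.
    # logits: (batch, num_classes) raw scores; index 0 is a learned threshold class.
    # labels: (batch, num_classes) multi-hot gold relations; labels[:, 0] must be 0.
    th_mask = torch.zeros_like(labels)
    th_mask[:, 0] = 1.0

    # Positive part: every gold relation should outscore the threshold class.
    # Masked entries get a large negative value (not -inf) so 0 * log_prob stays 0.
    pos_logits = logits.masked_fill((labels + th_mask) == 0, -1e30)
    loss_pos = -(F.log_softmax(pos_logits, dim=-1) * labels).sum(dim=-1)

    # Negative part: the threshold class should outscore all non-gold relations,
    # giving each instance its own cutoff instead of one global threshold.
    neg_logits = logits.masked_fill(labels == 1, -1e30)
    loss_neg = -F.log_softmax(neg_logits, dim=-1)[:, 0]

    return (loss_pos + loss_neg).mean()

At inference under this scheme, a candidate pair is assigned every relation whose logit exceeds the threshold-class logit, which is what lets the decision cutoff flex per instance rather than stay fixed, helping rare, long-tailed classes.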
- Subjects :
- LANGUAGE models
- DATA augmentation
- DOCUMENT imaging systems
- COMMON sense
Details
- Language :
- English
- ISSN :
- 20763417
- Volume :
- 13
- Issue :
- 22
- Database :
- Complementary Index
- Journal :
- Applied Sciences (2076-3417)
- Publication Type :
- Academic Journal
- Accession number :
- 173828315
- Full Text :
- https://doi.org/10.3390/app132212208