BERTHop: An Effective Vision-and-Language Model for Chest X-ray Disease Diagnosis.

Authors :
Monajatipoor M
Rouhsedaghat M
Li LH
Kuo CJ
Chien A
Chang KW
Source :
Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention [Med Image Comput Comput Assist Interv] 2022 Sep; Vol. 13435, pp. 725-734. Date of Electronic Publication: 2022 Sep 16.
Publication Year :
2022

Abstract

Vision-and-language (V&L) models take image and text as input and learn to capture the associations between them. These models can potentially deal with tasks that involve understanding medical images along with their associated text. However, applying V&L models in the medical domain is challenging due to the expense of data annotation and the need for domain knowledge. In this paper, we identify that the visual representations used in general-domain V&L models are not suitable for processing medical data. To overcome this limitation, we propose BERTHop, a transformer-based model that combines PixelHop++ and VisualBERT to better capture the associations between clinical notes and medical images. Experiments on the OpenI dataset, a commonly used thoracic disease diagnosis benchmark, show that BERTHop achieves an average Area Under the Curve (AUC) of 98.12%, which is 1.62% higher than the state of the art, while being trained on a 9× smaller dataset.
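
The abstract only sketches the architecture (PixelHop++ visual features fused with a VisualBERT-style transformer and scored by per-disease AUC). The snippet below is a minimal, illustrative PyTorch sketch of that kind of pipeline, not the authors' implementation: the class name, hidden sizes, the linear stand-in for the PixelHop++ feature extractor, and the toy data are all assumptions made for illustration.

```python
# Illustrative sketch (not the paper's code) of a BERTHop-style pipeline:
# visual features from an unsupervised extractor (PixelHop++ in the paper;
# a random-feature stand-in here) are concatenated with text token embeddings,
# passed through a transformer encoder, and classified per disease label,
# with performance reported as macro-averaged ROC AUC.
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

class BERTHopSketch(nn.Module):
    def __init__(self, vocab_size=30522, hidden=256, visual_dim=2048,
                 num_labels=14, num_layers=4, num_heads=4):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, hidden)
        # Stand-in projection for PixelHop++ features into the encoder width.
        self.visual_proj = nn.Linear(visual_dim, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.classifier = nn.Linear(hidden, num_labels)  # multi-label logits

    def forward(self, input_ids, visual_feats):
        text = self.text_embed(input_ids)                 # (B, T, H)
        vis = self.visual_proj(visual_feats)              # (B, V, H)
        fused = self.encoder(torch.cat([text, vis], 1))   # joint self-attention
        # Use the first text position as a pooled representation in this sketch.
        return self.classifier(fused[:, 0])

# Toy usage: random report tokens and visual features, 14 hypothetical labels.
model = BERTHopSketch()
ids = torch.randint(0, 30522, (32, 64))
feats = torch.randn(32, 36, 2048)
logits = model(ids, feats)
labels = torch.randint(0, 2, (32, 14)).float()
print(roc_auc_score(labels.numpy(),
                    torch.sigmoid(logits).detach().numpy(),
                    average="macro"))
```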

Details

Language :
English
Volume :
13435
Database :
MEDLINE
Journal :
Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention
Publication Type :
Academic Journal
Accession number :
37093922
Full Text :
https://doi.org/10.1007/978-3-031-16443-9_69