1. Classification of Radiological Text in Small and Imbalanced Datasets in a Non-English Language
- Authors
Beliveau, Vincent, Kaas, Helene, Prener, Martin, Ladefoged, Claes N., Elliott, Desmond, Knudsen, Gitte M., Pinborg, Lars H., and Ganz, Melanie
- Subjects
Computer Science - Computation and Language, Computer Science - Artificial Intelligence
- Abstract
Natural language processing (NLP) in the medical domain can underperform in real-world applications involving small datasets in a non-English language with few labeled samples and imbalanced classes. There is as yet no consensus on how to approach this problem. We evaluated a set of NLP models, including BERT-like transformers, few-shot learning with sentence transformers (SetFit), and prompted large language models (LLMs), using three datasets of radiology reports on magnetic resonance images of epilepsy patients in Danish, a low-resource language. Our results indicate that BERT-like models pretrained in the target domain of radiology reports currently offer the best performance for this scenario. Notably, the SetFit and LLM models underperformed compared to the BERT-like models, with the LLMs performing worst. Importantly, none of the models investigated was sufficiently accurate to allow for text classification without any supervision. However, they show potential for data filtering, which could reduce the amount of manual labeling required.
- Published
2024