
FlauBERT vs. CamemBERT: Understanding patient's answers by a French medical chatbot.

Authors :
Blanc, Corentin
Bailly, Alexandre
Francis, Élie
Guillotin, Thierry
Jamal, Fadi
Wakim, Béchara
Roy, Pascal
Source :
Artificial Intelligence in Medicine. May 2022, Vol. 127.
Publication Year :
2022

Abstract

In a number of circumstances, obtaining health-related information from a patient is time-consuming, whereas a chatbot interacting efficiently with that patient might help save health care professionals' time and better assist the patient. Making a chatbot understand patients' answers relies on Natural Language Understanding (NLU) technology, which is based on 'intent' and 'slot' predictions. Over the last few years, language models (such as BERT) pre-trained on huge amounts of data have achieved state-of-the-art intent and slot predictions by connecting a neural network architecture (e.g., linear, recurrent, long short-term memory, or bidirectional long short-term memory) to the language model and fine-tuning all language model and neural network parameters end-to-end. Currently, two language models are specialized in the French language: FlauBERT and CamemBERT. This study was designed to find out which combination of language model and neural network architecture was the best for intent and slot prediction by a chatbot from a French corpus of clinical cases. The comparisons showed that FlauBERT performed better than CamemBERT whatever the network architecture used, and that complex architectures did not significantly improve performance over simple ones whatever the language model. Thus, in the medical field, the results support recommending FlauBERT with a simple linear network architecture. [ABSTRACT FROM AUTHOR]
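The end-to-end setup the abstract describes (a pre-trained French language model topped with a simple linear classification head, with all parameters fine-tuned) can be illustrated with a minimal sketch using the Hugging Face Transformers library. This is not the authors' code: the checkpoint name, intent labels, and example utterance below are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' implementation) of intent
# prediction with FlauBERT plus a simple classification head.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "flaubert/flaubert_base_cased"  # swap for "camembert-base" to compare

# Hypothetical intent labels for a medical chatbot.
intent_labels = ["report_symptom", "ask_medication", "give_history"]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# AutoModelForSequenceClassification adds a lightweight classification head on
# top of the encoder; fine-tuning would update both the head and all FlauBERT
# parameters, i.e. the end-to-end training described in the abstract.
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=len(intent_labels)
)

# One illustrative French patient utterance
# ("I have had chest pain for two days").
inputs = tokenizer(
    "J'ai des douleurs à la poitrine depuis deux jours", return_tensors="pt"
)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_intent = intent_labels[int(logits.argmax(dim=-1))]
print(predicted_intent)

# Slot prediction would use AutoModelForTokenClassification in the same way,
# producing one tag per token (e.g., BIO-style slot labels).
```

Replacing MODEL_NAME with "camembert-base" gives the CamemBERT side of the comparison under the same head, mirroring the study's language-model-versus-architecture design.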

Details

Language :
English
ISSN :
0933-3657
Volume :
127
Database :
Academic Search Index
Journal :
Artificial Intelligence in Medicine
Publication Type :
Academic Journal
Accession number :
156286404
Full Text :
https://doi.org/10.1016/j.artmed.2022.102264