
Comparison of Open-Source and Proprietary LLMs for Machine Reading Comprehension: A Practical Analysis for Industrial Applications

Authors:
Alassan, Mahaman Sanoussi Yahaya
Espejel, Jessica López
Bouhandi, Merieme
Dahhane, Walid
Ettifouri, El Hassane
Publication Year:
2024

Abstract

Large Language Models (LLMs) have recently demonstrated remarkable performance in various Natural Language Processing (NLP) applications, such as sentiment analysis, content generation, and personalized recommendations. Despite their impressive capabilities, there remains a significant need for systematic studies of the practical application of LLMs in industrial settings, as well as of the specific requirements and challenges raised by their deployment in these contexts. This need is particularly critical for Machine Reading Comprehension (MRC), where factual, concise, and accurate responses are required. To date, most MRC systems rely on Small Language Models (SLMs) or Recurrent Neural Networks (RNNs) such as Long Short-Term Memory (LSTM) networks, a trend evident in the SQuAD2.0 leaderboard on Papers with Code. This article presents a comparative analysis of open-source LLMs and proprietary models on this task, aiming to identify lightweight, open-source alternatives that offer performance comparable to proprietary models.

Comment: Preprint submitted to Natural Language Processing Journal
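To make the MRC task concrete, the following is a minimal sketch (not taken from the paper) of SQuAD2.0-style extractive question answering with an open-source model, using the Hugging Face transformers pipeline. The model name "deepset/roberta-base-squad2" and the example context are illustrative assumptions, not models or data evaluated by the authors.

```python
# Minimal sketch: extractive MRC in the SQuAD2.0 setting with an
# open-source model. Assumes the `transformers` library is installed.
# The model choice is illustrative, not one compared in the paper.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = (
    "Large Language Models have recently been applied to many NLP tasks, "
    "including machine reading comprehension."
)
question = "To which tasks have Large Language Models been applied?"

# SQuAD2.0 includes unanswerable questions; handle_impossible_answer=True
# allows the model to return an empty answer (abstain) in that case.
result = qa(
    question=question,
    context=context,
    handle_impossible_answer=True,
)
print(result["answer"], result["score"])
```

In this setting, systems are typically scored with exact-match and token-level F1 against reference answer spans, which is why factual and concise outputs matter more here than in open-ended generation.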

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2406.13713
Document Type:
Working Paper