1. Large Language Models versus Classical Machine Learning: Performance in COVID-19 Mortality Prediction Using High-Dimensional Tabular Data
- Authors
Ghaffarzadeh-Esfahani, Mohammadreza, Ghaffarzadeh-Esfahani, Mahdi, Salahi-Niri, Arian, Toreyhi, Hossein, Atf, Zahra, Mohsenzadeh-Kermani, Amirali, Sarikhani, Mahshad, Tajabadi, Zohreh, Shojaeian, Fatemeh, Bagheri, Mohammad Hassan, Feyzi, Aydin, Tarighatpayma, Mohammadamin, Gazmeh, Narges, Heydari, Fateme, Afshar, Hossein, Allahgholipour, Amirreza, Alimardani, Farid, Salehi, Ameneh, Asadimanesh, Naghmeh, Khalafi, Mohammad Amin, Shabanipour, Hadis, Moradi, Ali, Zadeh, Sajjad Hossein, Yazdani, Omid, Esbati, Romina, Maleki, Moozhan, Nasr, Danial Samiei, Soheili, Amirali, Majlesi, Hossein, Shahsavan, Saba, Soheilipour, Alireza, Goudarzi, Nooshin, Taherifard, Erfan, Hatamabadi, Hamidreza, Samaan, Jamil S, Savage, Thomas, Sakhuja, Ankit, Soroush, Ali, Nadkarni, Girish, Darazam, Ilad Alavi, Pourhoseingholi, Mohamad Amin, and Safavi-Naini, Seyed Amir Ahmad
- Subjects
Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Computer Science - Computation and Language, 92C50, 68T50, J.3
- Abstract
Background: This study aimed to evaluate and compare the performance of classical machine learning models (CMLs) and large language models (LLMs) in predicting COVID-19 mortality using a high-dimensional tabular dataset.
Materials and Methods: We analyzed data from 9,134 COVID-19 patients collected across four hospitals. Seven CML models, including XGBoost and random forest (RF), were trained and evaluated. The structured data were converted into text for zero-shot classification by eight LLMs, including GPT-4 and Mistral-7b. Additionally, Mistral-7b was fine-tuned using the QLoRA approach to enhance its predictive capabilities.
Results: Among the CML models, XGBoost and RF achieved the highest accuracy, with F1 scores of 0.87 for internal validation and 0.83 for external validation. In the LLM category, GPT-4 was the top performer with an F1 score of 0.43. Fine-tuning Mistral-7b significantly improved its recall from 1% to 79%, yielding an F1 score of 0.74 that remained stable during external validation.
Conclusion: While LLMs show moderate performance in zero-shot classification, fine-tuning can significantly enhance their effectiveness, potentially bringing them closer to CML models. However, CMLs still outperform LLMs on high-dimensional tabular data tasks.
- Comment
Code is available at: https://github.com/mohammad-gh009/Large-Language-Models-vs-Classical-Machine-learning and https://github.com/Sdamirsa/Tehran_COVID_Cohort. The datasets are available from the corresponding author on reasonable request (sdamirsa@ymail.com).
- Published
- 2024
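The abstract's key preprocessing step, converting structured tabular records into text for zero-shot LLM classification, can be sketched as below. This is an illustrative sketch only, not code from the paper's repositories: the field names, prompt wording, and label vocabulary are all assumptions.

```python
# Hypothetical sketch: serialize one row of a tabular dataset into a
# natural-language prompt suitable for zero-shot mortality classification
# by an LLM. Feature names and prompt phrasing are illustrative assumptions.

def record_to_prompt(record: dict) -> str:
    """Turn a structured patient record into a text classification prompt."""
    # Flatten each feature/value pair into a "name: value" fragment.
    features = "; ".join(f"{key}: {value}" for key, value in record.items())
    return (
        "Patient data: " + features + "\n"
        "Question: Did this COVID-19 patient survive hospitalization? "
        "Answer with exactly one word: 'survived' or 'deceased'."
    )

# Example usage with a made-up record (values are not from the study).
patient = {"age": 67, "oxygen saturation": "88%", "diabetes": "yes"}
prompt = record_to_prompt(patient)
print(prompt)
```

The resulting string would then be sent to each LLM; constraining the answer to a fixed label vocabulary makes the free-text response easy to map back to the binary outcome used by the CML baselines.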