
The Goldilocks paradigm: comparing classical machine learning, large language models, and few-shot learning for drug discovery applications.

Authors:
Snyder SH
Vignaux PA
Ozalp MK
Gerlach J
Puhl AC
Lane TR
Corbett J
Urbina F
Ekins S
Source:
Communications Chemistry [Commun Chem] 2024 Jun 12; Vol. 7 (1), pp. 134. Date of Electronic Publication: 2024 Jun 12.
Publication Year:
2024

Abstract

Recent advances in machine learning (ML) have led to newer model architectures, including transformers (large language models, LLMs), which show state-of-the-art results in text generation and image analysis, as well as few-shot learning (FSLC) models, which offer predictive power with extremely small datasets. These new architectures may offer promise, yet the 'no free lunch' theorem suggests that no single algorithm can outperform all others across every possible task. Here, we explore the capabilities of classical (support vector regression, SVR), FSLC, and transformer (MolBART) models over a range of dataset tasks and show a 'Goldilocks zone' for each model type, in which dataset size and feature distribution (i.e., dataset "diversity") determine the optimal algorithm strategy. When datasets are small (<50 molecules), FSLC models tend to outperform both classical ML and transformers. When datasets are small to medium sized (50-240 molecules) and diverse, transformers outperform both classical models and few-shot learning. Finally, when datasets are sufficiently large, classical models perform best, suggesting that the optimal model likely depends on the dataset available, its size, and its diversity. These findings may help to answer the perennial question of which ML algorithm to use when faced with a new dataset.
(© 2024. The Author(s).)
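The abstract's size/diversity decision rule lends itself to a small heuristic. Below is a minimal, illustrative Python sketch: the size thresholds (<50 and 50-240 molecules) are taken from the abstract, while the diversity proxy (mean pairwise Tanimoto distance over RDKit Morgan fingerprints), the 0.7 cutoff, and the function names are assumptions made here for illustration, not values or methods from the paper.

```python
# Illustrative sketch of the "Goldilocks zone" heuristic from the abstract.
# Size thresholds (<50, 50-240 molecules) come from the abstract; the
# diversity proxy and the 0.7 cutoff are assumptions, not from the paper.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem


def mean_pairwise_diversity(smiles_list):
    """Mean pairwise Tanimoto distance (1 - similarity) over Morgan fingerprints."""
    mols = [Chem.MolFromSmiles(s) for s in smiles_list]
    fps = [AllChem.GetMorganFingerprintAsBitVect(m, 2, nBits=2048)
           for m in mols if m is not None]  # skip unparsable SMILES
    total, pairs = 0.0, 0
    for i in range(len(fps)):
        for j in range(i + 1, len(fps)):
            total += 1.0 - DataStructs.TanimotoSimilarity(fps[i], fps[j])
            pairs += 1
    return total / pairs if pairs else 0.0


def choose_model(smiles_list, diversity_cutoff=0.7):
    """Map dataset size and diversity to the model family suggested by the paper."""
    n = len(smiles_list)
    if n < 50:
        return "few-shot learning (FSLC)"
    if n <= 240 and mean_pairwise_diversity(smiles_list) >= diversity_cutoff:
        return "transformer (e.g. MolBART)"
    return "classical ML (e.g. SVR)"
```

For example, `choose_model(["CCO", "CCN", "CCC"])` returns the few-shot label because the dataset has fewer than 50 molecules; in practice the diversity cutoff would need calibration against the paper's own diversity measure.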

Details

Language:
English
ISSN:
2399-3669
Volume:
7
Issue:
1
Database:
MEDLINE
Journal:
Communications Chemistry
Publication Type:
Academic Journal
Accession Number:
38866916
Full Text:
https://doi.org/10.1038/s42004-024-01220-4