Protein stability prediction by fine-tuning a protein language model on a mega-scale dataset.

Authors :
Chu, Simon
Narang, Kush
Siegel, Justin
Source :
PLoS Computational Biology; vol 20, iss 7
Publication Year :
2024

Abstract

Protein stability plays a crucial role in a variety of applications, such as food processing, therapeutics, and the identification of pathogenic mutations. Engineering campaigns commonly seek to improve protein stability, and there is strong interest in streamlining these processes to enable rapid optimization of highly stabilized proteins with fewer iterations. In this work, we explore using a mega-scale dataset to develop a protein language model optimized for stability prediction. ESMtherm is trained on the folding stability of 528k natural and de novo sequences derived from 461 protein domains and can accommodate deletions, insertions, and multiple point mutations. We show that a protein language model can be fine-tuned to predict folding stability. ESMtherm performs reasonably well on small protein domains and generalizes to sequences distal from the training set. Lastly, we discuss our model's limitations compared to other state-of-the-art methods in generalizing to larger protein scaffolds. Our results highlight the need for large-scale stability measurements on a diverse dataset that mirrors the distribution of sequence lengths commonly observed in nature.
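
The abstract describes fine-tuning a protein language model with a stability-regression objective. As an illustration only, the sketch below shows one generic way to set up such a fine-tuning run with the Hugging Face transformers library and a small ESM-2 checkpoint; the checkpoint name, toy data, and hyperparameters are assumptions for demonstration, not details taken from the paper or the authors' released code.

# Minimal sketch: fine-tune an ESM-2 model with a scalar regression head
# on (sequence, folding-stability) pairs. All specifics below are assumed
# for illustration; ESMtherm's actual base model and training setup may differ.
import torch
from transformers import AutoTokenizer, EsmForSequenceClassification

checkpoint = "facebook/esm2_t12_35M_UR50D"  # assumed small ESM-2 base model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = EsmForSequenceClassification.from_pretrained(
    checkpoint,
    num_labels=1,               # single scalar output: predicted stability
    problem_type="regression",  # trains the head with mean-squared-error loss
)

# Toy (sequence, stability) pairs; a real run would use the 528k-example
# mega-scale dataset described in the abstract.
pairs = [
    ("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", -1.2),
    ("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVA", -0.4),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for seq, stability in pairs:
    batch = tokenizer(seq, return_tensors="pt")
    labels = torch.tensor([[stability]], dtype=torch.float)
    out = model(**batch, labels=labels)  # forward pass computes the MSE loss
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

Because the regression head sits on top of the pretrained language model, the same loop handles deletions, insertions, and multi-mutant sequences without any mutation-specific encoding: each variant is simply tokenized as a full sequence.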

Details

Database :
OAIster
Journal :
PLoS Computational Biology; vol 20, iss 7
Notes :
application/pdf, PLoS Computational Biology vol 20, iss 7
Publication Type :
Electronic Resource
Accession number :
edsoai.on1452693944
Document Type :
Electronic Resource