
Linguistically inspired roadmap for building biologically reliable protein language models

Authors:
Vu, Mai Ha
Akbar, Rahmad
Robert, Philippe A.
Swiatczak, Bartlomiej
Greiff, Victor
Sandve, Geir Kjetil
Haug, Dag Trygve Truslew
Source:
Nat Mach Intell (2023)
Publication Year:
2022

Abstract

Deep neural-network-based language models (LMs) are increasingly applied to large-scale protein sequence data to predict protein function. However, because they are largely black-box models and thus challenging to interpret, current protein LM approaches do not contribute to a fundamental understanding of sequence-function mappings, hindering rule-based biotherapeutic drug development. We argue that guidance drawn from linguistics, a field specialized in analytical rule extraction from natural language data, can aid in building more interpretable protein LMs that are more likely to learn relevant domain-specific rules. Differences between protein sequence data and linguistic sequence data require the integration of more domain-specific knowledge in protein LMs compared to natural language LMs. Here, we provide a linguistics-based roadmap for protein LM pipeline choices with regard to training data, tokenization, token embedding, sequence embedding, and model interpretation. Incorporating linguistic ideas into protein LMs enables the development of next-generation interpretable machine-learning models with the potential to uncover the biological mechanisms underlying sequence-function relationships.

Comment: 27 pages, 4 figures

Details

Database:
arXiv
Journal:
Nat Mach Intell (2023)
Publication Type:
Report
Accession number:
edsarx.2207.00982
Document Type:
Working Paper
Full Text:
https://doi.org/10.1038/s42256-023-00637-1