Interpretable pairwise distillations for generative protein sequence models.
- Source :
- PLoS Computational Biology; 6/23/2022, Vol. 18 Issue 6, p1-20, 20p, 1 Diagram, 1 Chart, 7 Graphs
- Publication Year :
- 2022
Abstract
- Many different types of generative models for protein sequences have been proposed in the literature. Their uses include the prediction of mutational effects, protein design, and the prediction of structural properties. Neural network (NN) architectures have shown strong performance, commonly attributed to their capacity to extract non-trivial higher-order interactions from the data. In this work, we analyze two different NN models and assess how close they are to simple pairwise distributions, which have been used in the past for similar problems. We present an approach for extracting pairwise models from more complex ones using an energy-based modeling framework. We show that for the tested models the extracted pairwise models can replicate the energies of the original models and come close in performance on tasks like mutational effect prediction. In addition, we show that even simpler, factorized models often come close in performance to the original models.
- Author summary: Complex neural networks trained on large biological datasets have recently shown powerful capabilities in tasks like predicting protein structure, assessing the effect of mutations on the fitness of proteins, and even designing completely novel proteins with desired characteristics. The enthralling prospect of leveraging these advances in fields like medicine and synthetic biology has generated great interest in academic research and industry. The connected question of what biological insights these methods actually gain during training has, however, received less attention. In this work, we systematically investigate to what extent neural networks capture information that could not be captured by simpler models. To this end, we develop a method to train simpler models to imitate more complex models, and compare their performance to that of the original neural network models. Surprisingly, we find that the simpler models thus trained often perform on par with the neural networks while having a considerably simpler structure. This highlights the importance of finding ways to interpret the predictions of neural networks in these fields, which could inform the creation of better models, improve methods for their assessment, and ultimately increase our understanding of the underlying biology. [ABSTRACT FROM AUTHOR]
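The distillation idea described in the abstract — querying a complex model for sequence energies and fitting a simpler pairwise (Potts-like) model to reproduce them — can be illustrated with a toy sketch. This is not the authors' implementation: the teacher here is itself a hidden pairwise model standing in for a neural network's energy function, the sequence length, alphabet size, and least-squares fitting procedure are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
L, q = 4, 3  # toy sequence length and alphabet size (assumed)

# Hypothetical teacher energy function. In the paper's setting this would
# be a trained neural network; here a hidden pairwise model stands in.
J_true = rng.normal(size=(L, L, q, q))

def teacher_energy(seq):
    return sum(J_true[i, j, seq[i], seq[j]]
               for i in range(L) for j in range(i + 1, L))

def pairwise_features(seq):
    # One-hot indicator for each pair (i < j) taking amino acids (a, b);
    # a pairwise model's energy is linear in these features.
    f = np.zeros((L, L, q, q))
    for i in range(L):
        for j in range(i + 1, L):
            f[i, j, seq[i], seq[j]] = 1.0
    return f.ravel()

# Sample sequences, query the teacher for their energies, and fit the
# student's couplings by ordinary least squares on the pairwise features.
seqs = rng.integers(0, q, size=(500, L))
X = np.array([pairwise_features(s) for s in seqs])
y = np.array([teacher_energy(s) for s in seqs])
J_fit, *_ = np.linalg.lstsq(X, y, rcond=None)

# Check that the distilled pairwise model reproduces teacher energies
# on held-out sequences (exact here, since the teacher is pairwise).
test = rng.integers(0, q, size=(50, L))
pred = np.array([pairwise_features(s) for s in test]) @ J_fit
err = np.max(np.abs(pred - np.array([teacher_energy(s) for s in test])))
```

Because the toy teacher is itself pairwise, the fit is essentially exact; with a genuinely higher-order teacher, the residual error would quantify how much the pairwise family fails to capture, which is the question the paper probes.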
Details
- Language :
- English
- ISSN :
- 1553-734X
- Volume :
- 18
- Issue :
- 6
- Database :
- Complementary Index
- Journal :
- PLoS Computational Biology
- Publication Type :
- Academic Journal
- Accession number :
- 157611590
- Full Text :
- https://doi.org/10.1371/journal.pcbi.1010219