
Improved GNNs for Log D7.4 Prediction by Transferring Knowledge from Low-Fidelity Data.

Authors :
Duan YJ
Fu L
Zhang XC
Long TZ
He YH
Liu ZQ
Lu AP
Deng YF
Hsieh CY
Hou TJ
Cao DS
Source :
Journal of chemical information and modeling [J Chem Inf Model] 2023 Apr 24; Vol. 63 (8), pp. 2345-2359. Date of Electronic Publication: 2023 Mar 31.
Publication Year :
2023

Abstract

The n-octanol/buffer solution distribution coefficient at pH 7.4 (log D7.4) is an indicator of lipophilicity, and it influences a wide variety of absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties and the druggability of compounds. In log D7.4 prediction, graph neural networks (GNNs) can uncover subtle structure-property relationships (SPRs) by automatically extracting features from molecular graphs, but their performance is often limited by the small size of available datasets. Herein, we present a transfer learning strategy, pretraining on computational data followed by fine-tuning on experimental data (PCFE), to fully exploit the predictive potential of GNNs. PCFE works by pretraining a GNN model on 1.71 million computational log D values (low-fidelity data) and then fine-tuning it on 19,155 experimental log D7.4 values (high-fidelity data). Experiments with three GNN architectures (graph convolutional network (GCN), graph attention network (GAT), and Attentive FP) demonstrated the effectiveness of PCFE in improving GNNs for log D7.4 prediction. Moreover, the best PCFE-trained GNN model (cx-Attentive FP, test-set R2 = 0.909) outperformed four strong descriptor-based models (random forest (RF), gradient boosting (GB), support vector machine (SVM), and extreme gradient boosting (XGBoost)). The robustness of the cx-Attentive FP model was also confirmed by evaluating models trained with different training-set sizes and dataset splitting strategies. We therefore developed a webserver and defined the applicability domain for this model. The webserver (http://tools.scbdd.com/chemlogd/) provides free log D7.4 prediction services.
In addition, the descriptors important for log D7.4 were identified by the Shapley additive explanations (SHAP) method, and the substructures most relevant to log D7.4 were identified by the attention mechanism. Finally, matched molecular pair analysis (MMPA) was performed to summarize the contributions of common chemical substituents to log D7.4, including a variety of hydrocarbon groups, halogen groups, heteroatoms, and polar groups. In conclusion, we believe that the cx-Attentive FP model can serve as a reliable tool to predict log D7.4, and we hope that pretraining on low-fidelity data can help GNNs make accurate predictions of other endpoints in drug discovery.
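The PCFE strategy described in the abstract is a standard two-stage transfer-learning recipe: pretrain on abundant low-fidelity labels, then fine-tune the same weights on scarce high-fidelity labels. The sketch below illustrates only that weight hand-off with a toy linear regressor trained by gradient descent; it is not the authors' GNN code, and the synthetic data, model, and learning rates are hypothetical stand-ins for the paper's 1.71 million computational log D values and 19,155 experimental log D7.4 values.

```python
import random

def gd_fit(w, b, xs, ys, lr, epochs):
    """Plain full-batch gradient descent on MSE loss for y = w*x + b."""
    n = len(xs)
    for _ in range(epochs):
        gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

random.seed(0)
# Low-fidelity data: plentiful but systematically biased (wrong intercept).
low_x = [random.uniform(-1, 1) for _ in range(1000)]
low_y = [2.0 * x + 0.5 for x in low_x]
# High-fidelity data: scarce but accurate (true intercept is 0.1).
high_x = [random.uniform(-1, 1) for _ in range(20)]
high_y = [2.0 * x + 0.1 for x in high_x]

# Stage 1 (pretraining): fit the model on low-fidelity data from scratch.
w, b = gd_fit(0.0, 0.0, low_x, low_y, lr=0.1, epochs=200)
# Stage 2 (fine-tuning): continue training the SAME weights on
# high-fidelity data, correcting the bias learned in stage 1.
w, b = gd_fit(w, b, high_x, high_y, lr=0.05, epochs=200)
```

After stage 1 the model carries useful structure (the slope) plus the low-fidelity bias; fine-tuning keeps the former and corrects the latter, which is the intuition behind pretraining GNNs on computational log D before fine-tuning on experimental log D7.4.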

Details

Language :
English
ISSN :
1549-960X
Volume :
63
Issue :
8
Database :
MEDLINE
Journal :
Journal of chemical information and modeling
Publication Type :
Academic Journal
Accession Number :
37000044
Full Text :
https://doi.org/10.1021/acs.jcim.2c01564