Water molecules in the active site of an enzyme occupy a complex, heterogeneous environment, and the thermodynamic properties of active-site water are functions of position. As a consequence, it is thought that an enzyme inhibitor can gain affinity by extending into a region occupied by unfavorable water or lose affinity by displacing water from a region where it was relatively stable. Recent advances in the characterization of binding-site water, based on the analysis of molecular simulations with explicit water molecules, have focused largely on simplified representations of water as occupying well-defined hydration sites. Our grid-based treatment of hydration, GIST, offers a more complete picture of the complex distributions of water properties, but it has not yet been applied to proteins. This first application of GIST to protein–ligand modeling, for the case of Coagulation Factor Xa, shows that ligand scoring functions based on GIST perform at least as well as scoring functions based on a hydration-site approach (HSA), when applied to exactly the same simulation data. Interestingly, the displacement of energetically unfavorable water emerges as the dominant factor in the fitted scoring functions, for both GIST and HSA methods, while water entropy plays a secondary role, at least in the present context.
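To make the GIST-based scoring idea concrete, the sketch below illustrates how a displaced-water energy term can be accumulated from a grid of per-voxel water energies. The array names, grid layout, and the 2.4 Å displacement cutoff are assumptions for illustration, not the fitted functional form used in the study.

```python
# Minimal sketch of a GIST-style displaced-water scoring term.
# Assumptions (not from the paper): GIST output is available as a NumPy array
# of per-voxel water energy density (kcal/mol per A^3) on a regular grid, and
# a voxel counts as "displaced" if it lies within a fixed cutoff of any ligand
# heavy atom. The study's scoring functions were fitted to affinity data; this
# only illustrates how a grid-based water term can be summed.
import numpy as np

def displaced_water_energy(energy_density, grid_origin, grid_spacing,
                           ligand_coords, cutoff=2.4):
    """Sum the GIST water energy in voxels displaced by the ligand.

    energy_density : (nx, ny, nz) array, kcal/mol per A^3
    grid_origin    : (3,) array, Cartesian origin of the grid in A
    grid_spacing   : float, voxel edge length in A
    ligand_coords  : (n_atoms, 3) array of ligand heavy-atom positions in A
    cutoff         : displacement radius in A (hypothetical value)
    """
    nx, ny, nz = energy_density.shape

    # Cartesian coordinates of every voxel center
    ix, iy, iz = np.indices((nx, ny, nz))
    centers = (np.stack([ix, iy, iz], axis=-1) + 0.5) * grid_spacing + grid_origin
    centers = centers.reshape(-1, 3)

    # A voxel is displaced if any ligand heavy atom is within the cutoff
    dists = np.linalg.norm(centers[:, None, :] - ligand_coords[None, :, :], axis=-1)
    displaced = (dists.min(axis=1) < cutoff).reshape(nx, ny, nz)

    voxel_volume = grid_spacing ** 3
    # Sign flipped: displacing energetically unfavorable water is rewarded
    return -(energy_density[displaced].sum() * voxel_volume)
```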
Recently, much effort has been invested in using convolutional neural network (CNN) models trained on 3D structural images of protein-ligand complexes to distinguish binding from non-binding ligands for virtual screening. However, the dearth of reliable protein-ligand X-ray structures and binding affinity data has required the use of constructed datasets for the training and evaluation of CNN molecular recognition models. Here, we outline various sources of bias in one such widely used dataset, the Directory of Useful Decoys: Enhanced (DUD-E). We constructed and performed tests to investigate whether CNN models developed using DUD-E are properly learning the underlying physics of molecular recognition, as intended, or are instead learning biases inherent in the dataset itself. We find that superior enrichment efficiency in CNN models can be attributed to the analogue and decoy bias hidden in the DUD-E dataset rather than to successful generalization of the pattern of protein-ligand interactions. When we compared additional deep learning models trained on PDBbind datasets, we found that their enrichment performance on DUD-E was not superior to that of the docking program AutoDock Vina. Together, these results suggest that biases that could be present in constructed datasets should be thoroughly evaluated before the datasets are applied to machine learning-based methodology development.
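As a point of reference for the enrichment comparisons described above, the sketch below shows one common way such performance is quantified: the enrichment factor (EF) at a fixed top fraction of the ranked screening library. The toy scores and labels are hypothetical placeholders; the paper's evaluation used DUD-E and PDBbind-derived sets.

```python
# Minimal sketch of the enrichment-factor (EF) metric used to compare
# virtual-screening methods such as CNN scoring models and AutoDock Vina.
# The data below are synthetic placeholders for illustration only.
import numpy as np

def enrichment_factor(scores, labels, fraction=0.01):
    """EF at a given fraction of the ranked database.

    scores   : predicted scores (higher = more likely active)
    labels   : 1 for actives, 0 for decoys
    fraction : top fraction of the ranked list to inspect (0.01 = EF at 1%)
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    n_total = len(labels)
    n_top = max(1, int(round(fraction * n_total)))

    order = np.argsort(-scores)                # best-scored compounds first
    actives_in_top = labels[order][:n_top].sum()
    hit_rate_top = actives_in_top / n_top
    hit_rate_all = labels.sum() / n_total
    return hit_rate_top / hit_rate_all         # EF = 1 means no enrichment

# Toy example: 5 actives hidden among 95 decoys, actives tend to score higher
rng = np.random.default_rng(0)
labels = np.array([1] * 5 + [0] * 95)
scores = labels * 2.0 + rng.normal(size=100)
print(enrichment_factor(scores, labels, fraction=0.05))
```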