2023
DOI: 10.1101/2023.09.17.558145
Preprint

PSICHIC: physicochemical graph neural network for learning protein-ligand interaction fingerprints from sequence data

Huan Yee Koh,
Anh T.N. Nguyen,
Shirui Pan
et al.

Abstract: In drug discovery, determining the binding affinity and functional effects of small-molecule ligands on proteins is critical. Current computational methods can predict these protein-ligand interaction properties but often lose accuracy without high-resolution protein structures and falter in predicting functional effects. We introduce PSICHIC (PhySIcoCHemICal graph neural network), a framework uniquely incorporating physicochemical constraints to decode interaction fingerprints directly from sequence data alone…

Cited by 7 publications (4 citation statements)
References: 62 publications
“…Compared with structure-based and complex-based methods, the model shows comparable performances across all metrics with lower standard deviations. Furthermore, the model performs on par with other more sophisticated state-of-the-art methods, namely PSICHIC [58] and TankBind. It is worth noting that PSICHIC also leverages the residue-level embeddings extracted from the pre-trained ESM; however, Koh et al [58] use these embeddings to construct 2D graphs of proteins. This does not preserve the SE(3)-symmetry (rotations and translations), which is an important property in learning three-dimensional structures.…”
Section: Results (citation type: mentioning, confidence: 99%)
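The citing authors point out that PSICHIC feeds residue-level embeddings from a pre-trained ESM model into 2D protein graphs. As a rough, non-authoritative sketch of that idea, the snippet below extracts per-residue embeddings with the fair-esm package and attaches them as node features of a simple sequence-topology graph; the ESM-2 checkpoint and the window-based edge rule are assumptions made here for illustration, not the construction used by Koh et al.

```python
# Illustrative sketch only: per-residue ESM-2 embeddings as node features of a
# simple 2D (topological) protein graph. The checkpoint and the sequential-window
# edge scheme are assumptions, not PSICHIC's actual graph construction.
import torch
import esm  # provided by the fair-esm package

model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
batch_converter = alphabet.get_batch_converter()
model.eval()

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # toy example sequence
_, _, tokens = batch_converter([("query", sequence)])

with torch.no_grad():
    out = model(tokens, repr_layers=[33])
# Drop the BOS/EOS special tokens to keep one 1280-d embedding per residue.
node_features = out["representations"][33][0, 1:len(sequence) + 1]

# Hypothetical edges: connect residues within a small sequence window.
window = 3
edges = [(i, j)
         for i in range(len(sequence))
         for j in range(i + 1, min(len(sequence), i + window + 1))]
edge_index = torch.tensor(edges, dtype=torch.long).t()  # shape (2, num_edges)
```

A purely topological graph like this carries no 3D coordinates, which is why the quoted passage notes that SE(3) symmetry (rotations and translations) is not preserved.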
“…The incorporation of these diverse datasets forms a comprehensive and varied testbed, allowing us to thoroughly assess the predictive capabilities of our multimodal representation when estimating protein–ligand binding affinities. To ensure a fair and standardized evaluation, we meticulously followed the test/training/validation split settings as outlined in previous studies, specifically adhering to the configurations defined in the respective sources for the DAVIS, KIBA, and PDBbind version 2020 datasets [15, 43]. By maintaining this consistency, we aimed to create a level playing field for comparisons, allowing for an equitable assessment of our multimodal representation’s performance.…”
Section: Methods (citation type: mentioning, confidence: 99%)
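The quoted passage hinges on every model reusing the exact split configurations published for DAVIS, KIBA, and PDBbind v2020. The sketch below only illustrates that practice in general terms: a split is generated once from a fixed seed, persisted, and sanity-checked for overlap, so all methods are scored on identical indices. The dataset size, seed, and file name are hypothetical placeholders, not the benchmarks' official splits.

```python
import json
import random

# Minimal sketch, assuming splits are fixed once and reused by every model being compared.
# The size, seed, and file name are hypothetical; DAVIS, KIBA, and PDBbind v2020 define
# their own official splits, which is what the citing work adheres to.
def make_split(n_samples: int, seed: int = 0, fractions=(0.8, 0.1, 0.1)):
    rng = random.Random(seed)
    idx = list(range(n_samples))
    rng.shuffle(idx)
    n_train = int(fractions[0] * n_samples)
    n_valid = int(fractions[1] * n_samples)
    return {"train": idx[:n_train],
            "valid": idx[n_train:n_train + n_valid],
            "test": idx[n_train + n_valid:]}

split = make_split(n_samples=10000, seed=0)  # placeholder dataset size
with open("split.json", "w") as f:           # persist so every run sees identical indices
    json.dump(split, f)

# Sanity check before benchmarking: partitions must not overlap.
assert not set(split["train"]) & set(split["test"])
assert not set(split["valid"]) & set(split["test"])
```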