2023
DOI: 10.26434/chemrxiv-2023-5cz7s
Preprint

Online triplet contrastive learning enables efficient cliff awareness in molecular activity prediction

Abstract: Predicting molecular activity and quantitative structure–activity relationships (QSAR) is important for drug discovery and optimization. With molecular structures as input, graph neural networks (GNNs) are well suited to activity prediction but tend to overlook activity cliffs (ACs), where structurally similar molecules have vastly different activity values. To address this, we introduce a new online triplet contrastive learning framework, ACANet, that incorporates a unique activity-cliff-awareness (ACA) loss function…
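The truncated abstract names the ACA loss but does not give its exact form, so the snippet below is only a minimal PyTorch sketch of how an online-triplet, cliff-aware objective could be written: a standard regression term plus a triplet-margin term whose positives and negatives are mined within each mini-batch from activity differences. The function name `aca_loss`, the threshold `cliff_delta`, and the weight `alpha` are hypothetical and not taken from ACANet.

```python
# Minimal sketch of an online-triplet, activity-cliff-aware loss (assumed form,
# not the authors' published formula).
import torch
import torch.nn.functional as F

def aca_loss(emb, y_pred, y_true, margin=1.0, cliff_delta=1.0, alpha=0.1):
    """emb: (B, D) molecule embeddings; y_pred, y_true: (B,) activities.
    Combines MSE regression with a triplet term that pulls together molecules
    with similar activities and pushes apart potential cliff pairs, with
    hard examples mined online inside the mini-batch."""
    mse = F.mse_loss(y_pred, y_true)

    d_emb = torch.cdist(emb, emb)                      # (B, B) embedding distances
    d_act = (y_true[:, None] - y_true[None, :]).abs()  # (B, B) activity differences

    pos_mask = (d_act < cliff_delta).float()           # similar-activity pairs
    neg_mask = (d_act >= cliff_delta).float()          # potential cliff pairs

    # Online hard mining: farthest similar-activity molecule vs. closest cliff partner.
    hardest_pos = (d_emb * pos_mask).max(dim=1).values
    masked_neg = d_emb + (1.0 - neg_mask) * 1e6        # ignore non-cliff pairs
    hardest_neg = masked_neg.min(dim=1).values

    triplet = F.relu(hardest_pos - hardest_neg + margin).mean()
    return mse + alpha * triplet
```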

Cited by 2 publications (2 citation statements)
References 13 publications

“…For a biased model, it may correctly predict the binding affinity based on the incorrect model. Notably, contrastive learning has showcased competitive results in tasks like small-molecule property prediction, sequence-based prediction of drug–target interactions, similarity-based virtual screening, reaction classification, and enzyme function prediction. Contrastive learning can also be applied in a multimodal setting, which enhances the learning of joint representations from varied modalities and thus bolsters the model’s performance. Given these considerations, we have therefore employed contrastive learning to reduce the embedding distance between the two protein–ligand representation modalities and expand the distance from incorrect poses in a shared latent space, thereby improving the protein–ligand representation.…”
Section: Introduction (mentioning, confidence: 99%)
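The statement above describes the role contrastive learning plays in the citing work: pulling the two protein–ligand representation modalities together for the correct pose and pushing incorrect poses away in a shared latent space. As a rough sketch of that kind of objective, an InfoNCE-style loss could look as follows; the function name, tensor shapes, and temperature are assumptions for illustration, not the citing authors' implementation.

```python
# Illustrative InfoNCE-style pose-contrastive loss (assumed formulation).
import torch
import torch.nn.functional as F

def pose_contrastive_loss(z_modal_a, z_modal_b, z_decoy_poses, temperature=0.07):
    """z_modal_a, z_modal_b: (B, D) embeddings of the same complex from two
    modalities (correct pose); z_decoy_poses: (B, K, D) embeddings of K
    incorrect poses per complex. The correct pose is treated as class 0."""
    a = F.normalize(z_modal_a, dim=-1)
    b = F.normalize(z_modal_b, dim=-1)
    d = F.normalize(z_decoy_poses, dim=-1)

    pos = (a * b).sum(dim=-1, keepdim=True)              # (B, 1) matched-pose similarity
    neg = torch.einsum("bd,bkd->bk", a, d)                # (B, K) similarity to decoy poses

    logits = torch.cat([pos, neg], dim=1) / temperature
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```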
“…Winkler and Le (2017) investigated how deep neural networks can address challenges posed by ACs in QSAR datasets. Shen et al. (2023) introduced a new online triplet contrastive learning framework (ACANet) to enable efficient AC awareness in molecular activity prediction. Yin et al. (2022a) proposed AFSE to improve generalization performance and reduce the impact of ACs on bioactivity prediction.…”
Section: Introduction (mentioning, confidence: 99%)