2020
DOI: 10.1021/acs.jcim.9b01212

Predicting Binding from Screening Assays with Transformer Network Embeddings

Abstract: Cheminformatics aims to assist in chemistry applications that depend on molecular interactions, structural characteristics, and functional properties. The arrival of deep learning and the abundance of easily accessible chemical data from repositories like PubChem have enabled advancements in computer-aided drug discovery. Virtual High-Throughput Screening (vHTS) is one such technique that integrates chemical domain knowledge to perform in silico biomolecular simulations, but prediction of binding affinity is r…


Cited by 34 publications (18 citation statements). References 60 publications.
“…We applied the pretrained end-to-end transformer deep neural network to 83,000,000 SMILES collected from PubChem to obtain a structural embedding vector for each drug, based on the frequency and sequential order of the SMILES characters. An encoder layer with a self-attention operation mapped the SMILES sequence into the latent space based on each character's relationship with the other characters [25]. The decoder layers had a similar structure to the encoder layers, and the output of the final decoder layer matched the input sequence (Figure 2(b))…”
Section: Methods
confidence: 99%
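
This statement describes a character-level encoder-decoder transformer trained to reproduce its input SMILES, with the encoder's latent output serving as the molecular embedding. Below is a minimal PyTorch sketch of that kind of architecture, assuming a pre-tokenized integer vocabulary; the class name, hyperparameters, and mean-pooling choice are illustrative assumptions, not details taken from the cited papers.

```python
import torch
import torch.nn as nn

class SmilesTransformerAE(nn.Module):
    """Encoder-decoder transformer that reconstructs its input SMILES;
    the encoder's latent output doubles as a molecular embedding."""

    def __init__(self, vocab_size, d_model=256, nhead=8, num_layers=4, max_len=128):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)  # character identity
        self.pos = nn.Embedding(max_len, d_model)     # character order
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True,
        )
        self.out = nn.Linear(d_model, vocab_size)

    def embed(self, x):
        positions = torch.arange(x.size(1), device=x.device)
        return self.tok(x) + self.pos(positions)

    def forward(self, src, tgt):
        # Causal additive mask: -inf above the diagonal stops the decoder
        # from attending to future characters while reconstructing.
        t = tgt.size(1)
        tgt_mask = torch.full((t, t), float("-inf"), device=src.device).triu(1)
        memory = self.transformer.encoder(self.embed(src))  # latent space
        decoded = self.transformer.decoder(self.embed(tgt), memory, tgt_mask=tgt_mask)
        return self.out(decoded), memory.mean(dim=1)        # logits + pooled embedding

# Toy usage: two tokenized SMILES of length 40. For real training one would
# prepend a start token and shift the decoder target by one position.
model = SmilesTransformerAE(vocab_size=64)
tokens = torch.randint(0, 64, (2, 40))
logits, embedding = model(tokens, tokens)
print(logits.shape, embedding.shape)  # torch.Size([2, 40, 64]) torch.Size([2, 256])
```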
“…• drugs MTE: 512-dimensional Molecular Transformer Embeddings (MTEs) [47], fed into fully connected drug subnetworks…”
Section: Testing the Impact of Different Methodological Variables
confidence: 99%
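
The statement above describes feeding precomputed 512-dimensional MTEs into fully connected drug subnetworks. A minimal PyTorch sketch of such a subnetwork follows; the layer widths, dropout rate, and activation are assumptions for illustration, not the citing paper's exact configuration.

```python
import torch
import torch.nn as nn

# Fully connected "drug subnetwork" consuming one 512-d Molecular
# Transformer Embedding per drug and emitting a compressed feature
# vector for the downstream affinity model.
drug_subnetwork = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Dropout(p=0.2),
    nn.Linear(256, 64),
    nn.ReLU(),
)

mte_batch = torch.randn(8, 512)         # stand-in for a batch of precomputed MTEs
drug_features = drug_subnetwork(mte_batch)
print(drug_features.shape)              # torch.Size([8, 64])
```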
“…To the best of our knowledge, it is one of the first attempts to utilize Transformer-like models as sole predictors of binding affinity. Worth mentioning here is a recent paper by Morris et al. [27] adopting a Transformer approach for affinity prediction; however, their setup is limited to a single-receptor task, so embeddings are learned for ligand SMILES only…”
Section: Methods
confidence: 99%