2020
DOI: 10.1021/acs.jcim.9b01075
Ranking Molecules with Vanishing Kernels and a Single Parameter: Active Applicability Domain Included

Abstract: In ligand-based virtual screening, high-throughput screening (HTS) data sets can be exploited to train classification models. Such models can be used to prioritize yet untested molecules, from the most likely active (against a protein target of interest) to the least likely active. In this study, a single-parameter ranking method with an Applicability Domain (AD) is proposed. In effect, Kernel Density Estimates (KDE) are revisited to improve their computational efficiency and incorporate an AD. Two modifications…
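The abstract describes scoring candidates with a KDE over the known actives, using a single bandwidth parameter and a vanishing kernel so that the applicability domain falls out for free. A minimal sketch of that idea, assuming set-based fingerprints, a Jaccard distance, and a biweight kernel (the kernel the citing work reports using); the distances, bandwidth, and example fingerprints below are illustrative, not the paper's actual data:

```python
# Sketch of ranking with a vanishing (biweight) kernel: a KDE over the
# active set with one bandwidth parameter h. Because the biweight kernel
# has finite support, a candidate farther than h from every active scores
# exactly 0, which doubles as an "outside the Applicability Domain" flag.

def jaccard_distance(a, b):
    """Distance between two fingerprints given as sets of feature indices."""
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def biweight(u):
    """Biweight kernel K(u) = (15/16)(1 - u^2)^2 for |u| < 1, else 0."""
    if abs(u) >= 1.0:
        return 0.0
    t = 1.0 - u * u
    return 0.9375 * t * t

def kde_score(candidate, actives, h):
    """Mean kernel value of `candidate` against the active set.

    Returns 0.0 when the candidate is farther than h from every active,
    i.e. outside the model's applicability domain.
    """
    return sum(biweight(jaccard_distance(candidate, a) / h)
               for a in actives) / len(actives)

actives = [{1, 2, 3, 4}, {1, 2, 3, 5}]
near = {1, 2, 3, 6}   # shares most features with the actives
far = {7, 8, 9}       # shares no features with any active
h = 0.8

print(kde_score(near, actives, h) > 0.0)    # True: inside the AD
print(kde_score(far, actives, h) == 0.0)    # True: distance 1.0 >= h
```

Untested molecules would then be rank-ordered by this score, with zero-scoring molecules reported as outside the applicability domain rather than given an unreliable rank.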

Cited by 4 publications (4 citation statements)
References 70 publications
“…To accelerate modeling and screening, the total number of molecules was capped to 65,000 (about 100 inactives per active molecule), using all actives but only a random partition from the inactive molecules. As machine-learning method, we used a prototype software developed in the lab using Kernel Density Estimate to rank-order molecules and at the same time obtain an applicability domain for the model ( Berenger and Yamanishi, 2020 ). The biweight kernel was used.…”
Section: Methodsmentioning
confidence: 99%
“…The computational teams used a variety of machine learning [47], docking [48,49] and hybrid approaches (Figure 2). In the group of machine learning based methods, approaches included: reinforcement learning, random forests [50], gradient boosting [51][52][53], kernel-based methods — e.g., Vanishing Ranking Kernels [54] — and deep learning methods — e.g., self-normalizing networks [55], LSTMs [56], CNNs [57][58][59][60][61], geometric deep learning, and graph neural networks [62][63][64]. Also stochastic-based methods — e.g., Naive Bayes Classifier [65] and Self-Consistent Regression [66] — were used.…”
Section: Virtual Screening Using Computational Methodsmentioning
confidence: 99%