2018
DOI: 10.1111/cbdd.13206

Improving classical scoring functions using random forest: The non‐additivity of free energy terms’ contributions in binding

Abstract: Despite recent efforts to improve the scoring performance of scoring functions, accurately predicting the binding affinity is still a challenging task. Therefore, different approaches were tried to improve the prediction performance of four scoring functions (X-Score, Vina, AutoDock, and RF-Score): by substituting the linear regression model of each classical scoring function with random forest, to examine the performance improvement when an additive functional form is not imposed, and by combining different scoring func…
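As a rough illustration of the approach the abstract describes, the sketch below replaces the weighted (additive) combination of a classical scoring function's energy terms with a random forest regressor, so no additive functional form is imposed. The feature names, synthetic data, and hyperparameters are illustrative assumptions, not the paper's actual terms or benchmark set.

```python
# Minimal sketch (not the paper's code): compare an additive linear model against a
# random forest trained on the same per-complex energy terms.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for per-complex energy terms (e.g., van der Waals, hydrogen bonding,
# hydrophobic contact, rotatable bonds) that a classical scoring function sums linearly.
n_complexes, n_terms = 500, 6
X = rng.normal(size=(n_complexes, n_terms))
# Synthetic "binding affinity" with an interaction (non-additive) component,
# mimicking the non-additivity of free energy term contributions.
y = X @ rng.normal(size=n_terms) + 0.5 * X[:, 0] * X[:, 1] + rng.normal(scale=0.3, size=n_complexes)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Additive functional form (weighted sum of terms) vs. random forest (no additivity imposed).
linear = LinearRegression().fit(X_tr, y_tr)
forest = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)

for name, model in [("linear regression", linear), ("random forest", forest)]:
    r, _ = pearsonr(y_te, model.predict(X_te))
    print(f"{name}: Pearson r = {r:.3f}")
```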

Cited by 30 publications (24 citation statements)
References 21 publications
“…5. Further, to have a better understanding of the performance of our models, we have systematically compared our predictions with the state-of-the-art results in the literature, 1,4,15,15,19,21,22,40,41,50,55 as far as we know. The results are illustrated in Figs.…”
Section: Basic Results (mentioning)
confidence: 99%
“…It has been found that our PerSpect ML model has delivered the best results in terms of both PCC and RMSE in all the three test cases. 1,4,15,15,19,21,22,40,41,50,55 2 Theory and methods…”
Section: Introduction (mentioning)
confidence: 99%
“…Compared to “classical” machine learning scoring functions, our method performs similarly to RF Score and other RF-based scoring functions [11, 77]. Despite recent advances in deep learning architectures, which consistently outperform “classical” machine learning algorithms in image recognition and natural language processing [15–19], RFs remain very competitive for binding affinity predictions.…”
Section: Discussion (mentioning)
confidence: 99%
“…75 S3, together with references for all the different methods. 4,11,20–24,51,63,64,71–74 We also showed that the AEScore model presented here can be exploited in tandem with standard docking scoring functions using a ∆-learning approach, in order to improve the performance in docking and virtual screening (in which AEScore does not perform well).…”
Section: Supporting Information (mentioning)
confidence: 90%
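The ∆-learning approach mentioned in the quote above amounts to learning a correction on top of a classical score rather than predicting the affinity directly. The sketch below is a generic illustration with placeholder data; it is not AEScore's implementation, and the variable names are assumptions.

```python
# Minimal sketch of delta-learning: fit a model to the residual between experimental
# affinity and a classical docking score, then add the predicted correction back.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_complexes, n_features = 300, 8
features = rng.normal(size=(n_complexes, n_features))   # placeholder structural descriptors
classical_score = rng.normal(size=n_complexes)          # placeholder classical docking score
# Synthetic experimental affinities that deviate from the classical score non-linearly.
experimental = classical_score + 0.4 * features[:, 0] - 0.2 * features[:, 1] ** 2

# Learn the correction (delta) rather than the affinity itself.
delta_model = RandomForestRegressor(n_estimators=300, random_state=0)
delta_model.fit(features, experimental - classical_score)
corrected_score = classical_score + delta_model.predict(features)
```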
“…Compared to "classical" machine learning scoring functions, our method performs similarly to RF Score and other RF-based scoring functions. 11,73 Despite recent advances in deep learning architectures, which consistently outperform "classical" machine learning algorithms in image recognition and natural language processing, 15–19 RFs remain very competitive for binding affinity predictions. All top-performing machine learning and deep learning methods considered here achieve similar performance on the CASF benchmarks, as measured by…”
Section: Visualization (mentioning)
confidence: 99%