2015
DOI: 10.1186/s13321-015-0074-6

Applying DEKOIS 2.0 in structure-based virtual screening to probe the impact of preparation procedures and score normalization

Abstract: Background: Structure-based virtual screening techniques can help to identify new lead structures and complement other screening approaches in drug discovery. Prior to docking, the data (protein crystal structures and ligands) should be prepared with great attention to molecular and chemical details. Results: Using a subset of 18 diverse targets from the recently introduced DEKOIS 2.0 benchmark set library, we found differences in the virtual screening performance of two popular docking tools (GOLD and Glide) when …

Cited by 25 publications (34 citation statements)
References 50 publications
“…Although we do not claim that our collection is comprehensive, there are no commonly used, state-of-the-art evaluation data sets available. Standardized benchmark sets are accessible for broadly applied modeling approaches such as pharmacophore searches [61] and molecular docking [16,62]. In contrast, the high diversity of the applied benchmark sets for binding site comparison makes it difficult to draw definitive conclusions in comparing the different tools.…”
Section: Benchmark Data Sets
confidence: 99%
“…Usually, published binding site comparison algorithms have been benchmarked using specific data sets, which are highly correlated with distinct application domains. However, standardized benchmark data sets, as known for other in silico methodologies [14][15][16], have never been developed for cavity comparison tools. This often precludes the selection of a suitable tool.…”
Section: Introduction
confidence: 99%
“…New databases were designed with an increasing complexity in the decoys selection methodologies (see section Benchmarking Databases). Nowadays, benchmarking databases are widely used to evaluate various VS tools (Kellenberger et al, 2004 ; Warren et al, 2006 ; McGaughey et al, 2007 ; von Korff et al, 2009 ; Braga and Andrade, 2013 ; Ibrahim et al, 2015a ; Pei et al, 2015 ) and to support the identification of hit/lead compounds using LBVS and SBVS (Allen et al, 2015 ; Ruggeri et al, 2015 ).…”
Section: The History Of Decoys Selection
confidence: 99%
“…On the contrary, the possible presence of active compounds in the decoy compounds set may introduce an artificial underestimation of the enrichment (Verdonk et al, 2004 ; Good and Oprea, 2008 ) since decoys are usually assumed to be inactive rather than proved to be true inactive compounds (i.e., confirmed inactive through experimental bioassays). New databases were designed to minimize those biases (Rohrer and Baumann, 2009 ; Vogel et al, 2011 ; Mysinger et al, 2012 ; Ibrahim et al, 2015a ). Finally, many studies pointed out that the VS performance depends on the target and its structural properties (structural flexibility, binding site physicochemical properties, etc.…”
Section: Introduction
confidence: 99%