2007
DOI: 10.1002/qsar.200610001
Modeling of the Inhibition Constant (Ki) of Some Cruzain Ketone‐Based Inhibitors Using 2D Spatial Autocorrelation Vectors and Data‐Diverse Ensembles of Bayesian‐Regularized Genetic Neural Networks

Abstract: The inhibition constant (Ki) of a set of 46 ketone-based inhibitors of the cysteine protease cruzain was successfully modeled by means of data-diverse ensembles of Bayesian-regularized genetic neural networks. 2D spatial autocorrelation vectors were used for encoding structural information, yielding a nonlinear model describing about 90 and 75% of ensemble training and test set variances, respectively. From the results of a ranking analysis of the neural network inputs, it was derived that atomic van…

Cited by 31 publications (18 citation statements) · References 47 publications
“…The BRANN models are a class of multilayer perceptron networks with high predictive power and reproducibility of results [30]. More details about this class of networks can be found in the literature [31,32]. The number of neurons in the hidden layer was optimized using cross-validation; the optimum number was 2.…”
Section: Results
confidence: 99%
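As a hedged illustration of that model-selection step, the sketch below optimizes the hidden-neuron count by cross-validation. It uses scikit-learn's MLPRegressor with a fixed L2 penalty (alpha) as a rough stand-in for true Bayesian regularization, and a made-up descriptor matrix X and target y; none of these names or values come from the cited paper.

```python
# Illustrative sketch only: a fixed L2 penalty (alpha) stands in for full
# Bayesian regularization; X and y are hypothetical placeholder data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(46, 8))   # e.g., 46 inhibitors x 8 descriptors
y = rng.normal(size=46)        # e.g., pKi values

best_size, best_score = None, -np.inf
for n_hidden in (1, 2, 3, 4, 5):
    model = MLPRegressor(hidden_layer_sizes=(n_hidden,), alpha=1e-2,
                         max_iter=5000, random_state=0)
    score = cross_val_score(model, X, y, cv=5,
                            scoring="neg_mean_squared_error").mean()
    if score > best_score:
        best_size, best_score = n_hidden, score

print(f"optimal hidden neurons: {best_size}")
```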
“…Bayesian-regularized artificial neural networks (BRANNs) have the potential to yield models that are relatively independent of network architecture above a minimum architecture [26,27]. Bayesian regularization estimates the number of effective parameters, which is lower than the number of weights.…”
Section: Bayesian Regularized Genetic Neural Networks (BRGNNs)
confidence: 99%
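To make the "effective parameters" notion concrete, here is a minimal numeric sketch of MacKay's formula γ = Σ λ_i / (λ_i + α), where λ_i are the eigenvalues of the data-error Hessian and α is the weight-decay hyperparameter. The Hessian values below are invented for illustration; this is not the implementation from the cited paper.

```python
# Minimal numeric sketch of MacKay's effective number of parameters:
# gamma = sum_i lam_i / (lam_i + alpha). Directions with large curvature
# (lam_i >> alpha) count as ~1 parameter; flat directions count as ~0,
# so gamma is lower than the raw weight count.
import numpy as np

alpha = 0.5                                   # hypothetical regularizer
H = np.diag([50.0, 10.0, 1.0, 0.01, 0.001])  # hypothetical error Hessian
lam = np.linalg.eigvalsh(H)
gamma = np.sum(lam / (lam + alpha))
print(f"effective parameters: {gamma:.2f} of {lam.size} weights")
```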
“…Unlike other GA-based approaches, the objective of our algorithm is not to obtain a single optimum model but a reduced population of well-fitted models with MSE below a threshold value, at which Bayesian regularization guarantees that the networks possess good generalization ability [47]. This is because we used the training-set MSE, rather than cross-validation or test-set MSE, as the cost function; the optimum model therefore cannot be derived directly from the best-fitted model yielded by the genetic search.…”
Section: Genetic Algorithm
confidence: 99%
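A hedged sketch of this threshold-pool idea (not the authors' code): a mutation-only genetic search over boolean descriptor masks that stores every network whose training MSE falls below a chosen threshold, instead of tracking only the single best individual. The data, the threshold value, and the network settings are all placeholder assumptions.

```python
# Sketch under assumptions: collect every model below an MSE threshold
# during a simple mutation-only GA over descriptor subsets.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.normal(size=(46, 20))   # placeholder descriptor matrix
y = rng.normal(size=46)         # placeholder targets
THRESHOLD, pool = 0.8, []

population = [rng.random(20) < 0.3 for _ in range(20)]  # descriptor masks
for generation in range(10):
    scored = []
    for mask in population:
        if not mask.any():
            continue
        net = MLPRegressor(hidden_layer_sizes=(2,), alpha=1e-2,
                           max_iter=2000, random_state=0).fit(X[:, mask], y)
        mse = mean_squared_error(y, net.predict(X[:, mask]))
        scored.append((mse, mask))
        if mse < THRESHOLD:
            pool.append(mask)   # keep every well-fitted model, not just the best
    scored.sort(key=lambda t: t[0])
    parents = [m for _, m in scored[:10]]
    # mutation-only reproduction, kept deliberately simple for the sketch
    population = [p ^ (rng.random(20) < 0.05) for p in parents for _ in (0, 1)]

print(f"{len(pool)} models passed the MSE threshold")
```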
“…This process also helps to avoid chance correlations. The approach has proven highly efficient in comparison with cross-validation-based GA approaches, since only the optimum models, according to Bayesian regularization, are cross-validated at the end of the routine, rather than every model generated throughout the search process [47].…”
Section: Genetic Algorithm
confidence: 99%
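Continuing the sketch above: in line with the quoted passage, only the pooled, threshold-passing models are cross-validated at the end, rather than every candidate generated during the search. The cv=5 setting and network parameters remain assumptions.

```python
# Continuation of the previous sketch (reuses pool, X, y from it):
# cross-validate only the models that passed the MSE threshold.
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

for mask in pool:
    cv_mse = -cross_val_score(
        MLPRegressor(hidden_layer_sizes=(2,), alpha=1e-2,
                     max_iter=2000, random_state=0),
        X[:, mask], y, cv=5, scoring="neg_mean_squared_error").mean()
    print(f"{mask.sum()} descriptors -> CV MSE {cv_mse:.3f}")
```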