2006
DOI: 10.1016/j.ijpharm.2006.07.056
Performance comparison of neural network training algorithms in modeling of bimodal drug delivery

Cited by 196 publications (146 citation statements)
References 28 publications
“…This software has a graphical user interface (GUI) that supports different types of training algorithms. This enables the user to load the data sets, design the network structure, select the training algorithm, and generate the different models for each output variable in a single operation [49]. The networks were trained with incremental backpropagation as this has been reported to be the most commonly used algorithm [34].…”
Section: ANN Analysis
confidence: 99%
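The incremental (per-sample) backpropagation mentioned above can be sketched as follows. This is a minimal illustration of the idea — weights updated after every sample rather than after a full batch — using a single sigmoid neuron and a toy OR-gate dataset; the network size, learning rate, and data are assumptions for illustration, not the software used in the study:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_incremental(samples, epochs=1000, lr=0.5):
    """Incremental (online) backpropagation for one sigmoid neuron:
    weights and bias are updated after every sample presentation."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            # gradient of the squared error through the sigmoid
            delta = (y - target) * y * (1.0 - y)
            w = [w[i] - lr * delta * x[i] for i in range(2)]
            b -= lr * delta
    return w, b

# Toy OR-gate data (illustrative only)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_incremental(data)
preds = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data]
```

Because updates happen per sample, the error surface is traversed stochastically, which is part of why incremental training is a common default in ANN modeling software.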
“…The predictive ability of the ANN methods is assessed on the basis of the mean square error (MSE), root mean squared error (RMSE), the coefficient of determination (R²), and the absolute average deviation (AAD) [49] between the predicted values of the network and the actual values, which are calculated by Equations (4)-(7) as follows:…”
Section: Predictability of Model Evaluated in Artificial Neural Network
confidence: 99%
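The four error measures named above can be computed directly from observed and predicted values. This is a generic sketch of the standard definitions (with AAD expressed as a percentage), not a reproduction of the cited paper's Equations (4)-(7):

```python
import math

def error_metrics(actual, predicted):
    """MSE, RMSE, coefficient of determination R^2, and absolute
    average deviation (AAD, in %) between observed values and the
    network's predictions."""
    n = len(actual)
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n
    rmse = math.sqrt(mse)
    mean_a = sum(actual) / n
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    r2 = 1.0 - ss_res / ss_tot
    aad = 100.0 / n * sum(abs((p - a) / a) for a, p in zip(actual, predicted))
    return mse, rmse, r2, aad

mse, rmse, r2, aad = error_metrics([1.0, 2.0, 3.0], [1.1, 1.9, 3.2])
```

MSE/RMSE penalize large deviations, R² measures the fraction of variance explained, and AAD reports the average relative error, so together they give complementary views of predictive ability.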
“…The BPXNC then reads the input and output values in the training dataset and changes the value of the weighted links to reduce the differentiation between the predicted and observed values. The error in prediction is reduced across several training cycles (50 epochs) until the network reaches the best level of classification accuracy while avoiding overfitting [23].…”
Section: Experimental Design
confidence: 99%
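The training regime described above — a bounded number of epochs, stopping before overfitting sets in — is commonly implemented as early stopping on a validation error. A minimal sketch, in which `run_epoch`, `val_error`, and the simulated error curve are all illustrative assumptions rather than the cited study's procedure:

```python
def train_with_early_stopping(run_epoch, val_error, max_epochs=50, patience=3):
    """Run at most `max_epochs` training cycles and stop once the
    validation error has not improved for `patience` consecutive
    epochs, one common way to avoid overfitting.  `run_epoch` performs
    one pass of weight updates; `val_error` returns the current
    validation error."""
    best = float("inf")
    stale = 0
    epoch = 0
    for epoch in range(1, max_epochs + 1):
        run_epoch()
        err = val_error()
        if err < best:
            best, stale = err, 0
        else:
            stale += 1
            if stale >= patience:
                break
    return best, epoch

# Simulated validation-error curve: improves, then degrades (overfitting).
errors = iter([5.0, 4.0, 3.0, 3.2, 3.4, 3.6, 3.8, 4.0])
best, stopped_at = train_with_early_stopping(lambda: None, lambda: next(errors))
```

Here training halts at epoch 6, three epochs after the best validation error (3.0) was reached, rather than exhausting the 50-epoch budget.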
“…Here y_j is the net input to node j in the hidden or output layer, w_ij is the weight connecting neuron i to neuron j, x_i is the input to neuron j, and b_j is the bias connected to node j (29). A sigmoidal transfer function is usually used for nonlinear relationships (30,31). The general form of this function is shown below (28):…”
Section: ANN Description
confidence: 99%
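The two formulas referenced above — the net input y_j = Σᵢ w_ij·x_i + b_j and the logistic sigmoid transfer function — can be written out directly. The function names and sample values below are illustrative:

```python
import math

def net_input(x, w, b):
    """y_j = sum_i(w_ij * x_i) + b_j : weighted sum of inputs plus bias."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def sigmoid(y):
    """Logistic sigmoid transfer function f(y) = 1 / (1 + e^(-y)),
    mapping any net input to the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-y))

# One neuron with two inputs: 0.5*1.0 + (-0.25)*2.0 + 0.1 = 0.1
out = sigmoid(net_input([1.0, 2.0], [0.5, -0.25], 0.1))
```

The sigmoid's smooth, bounded output is what gives the network its ability to model nonlinear input–output relationships.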
“…Nevertheless, it is very hard to choose the number of hidden layers (30). Most of the literature indicates that one hidden layer is good enough to validate the prediction and may be the best choice for all applied feed-forward network designs (38).…”
Section: ANN Modeling
confidence: 99%